I have inherited a web service that uses asynchronous calls for various things, but I have run into an issue that I can't rightfully explain.
I have (basically) the following workflow.
public CustomObject Work()
{
    // synchronous work omitted
    ContinueWithMethod1();
    return retval;
}

public void ContinueWithMethod1()
{
    ContinueWithMethod2().ContinueWith(x =>
    {
        if (x.IsFaulted) { /* log error */ }
    });
}

public Task<CustValue> ContinueWithMethod2()
{
    return <AzureDocumentDBLibrary>.UpsertDocumentAsync(value, value).ContinueWith(y =>
    {
        if (y.IsFaulted) throw y.Exception;
        // populate CustValue
        return retval;
    });
}
The idea here is that the call to ContinueWithMethod1() inside of the Work() method is "fire and forget" so that execution passes to the return retval; statement immediately after calling ContinueWithMethod1(). If I put breakpoints on ContinueWithMethod1() and ContinueWithMethod2() the breakpoints are hit as I would expect and I don't see any errors generated in Visual Studio. The problem is that my call to Azure doesn't upload anything.
Some additional points:
If I execute this logic twice in a row the second call succeeds every time, but the first always fails.
If I change ContinueWithMethod1() to match the code sample below, everything works fine on the first pass through.
Code sample follows
public async void ContinueWithMethod1()
{
    await ContinueWithMethod2();
}
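As an aside, since async void gives the caller no way to observe faults, I assume an async Task signature would keep the same fire-and-forget behavior while at least letting me log errors; a minimal sketch reusing the names above:

public async Task ContinueWithMethod1()
{
    try
    {
        await ContinueWithMethod2();
    }
    catch (Exception ex)
    {
        // log ex here; an exception escaping an async void method can tear
        // down the process, while a Task-returning method exposes it to callers.
    }
}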
The problem I think I have here is that, depending on how long ContinueWithMethod2() takes, the web request connection between the user and the server could time out.
Has anyone ever run into this problem with nested ContinueWith() statements? If so, can you explain what is going on, and maybe how to debug the sequence of events that results in my first call to Azure being lost?
Related
I have what I think is a fairly basic flow in an asynchronous Web API controller. The code looks like the following:
public async Task<IHttpActionResult> Put([FromBody] ObjectType myObject)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    try
    {
        this.callbasicMethod();
        myObject = await myRepository.UpdateDB(myObject);
        await myRepository.DeleteSomeStuff(myObject.someProperty);
        var table = Helper.CreateDataTable(myObject.anotherProperty);
        await myRepository.InsertSomeStuff(table);
        return Ok(myObject);
    }
    catch (Exception ex)
    {
        return BadRequest(ex.Message);
    }
}
The problem is that none of the database calls (importantly the update call) ever execute. If I put a breakpoint in this method around the update call, everything works just fine. It is like some sort of race condition is happening. Please let me know if you have an idea of how to fix this issue or what I am doing wrong.
Also, please let me know if you need any clarification; I had to obfuscate the code to protect the intellectual property of the company I work for. If it helps, the methods being called are themselves implemented asynchronously, calling into asynchronous Dapper methods to communicate with the database.
I finally found a workaround, but not a true answer as to why. The two database calls were deleting some data from a table and then adding data to the same table. I wrote one stored procedure to handle both steps, and one method within the data layer of my application to call it, and now everything is working.
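Roughly, the combined data-layer method looks like the sketch below; the procedure name, the table-valued parameter type, and the connection field are all invented for illustration:

public async Task ReplaceSomeStuff(DataTable table)
{
    using (var connection = new SqlConnection(_connectionString))
    {
        // One stored procedure deletes the old rows and inserts the new ones
        // in a single batch, so there is no gap between two separate calls.
        await connection.ExecuteAsync(
            "dbo.ReplaceSomeStuff",
            new { Rows = table.AsTableValuedParameter("dbo.SomeStuffType") },
            commandType: CommandType.StoredProcedure);
    }
}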
I recently ran into a problem while developing an API that talked to two data sources in some methods. The POST for a couple of methods modified SQL data through Entity Framework as well as a data source using an old SDK that was STA COM based. To get the STA COM SDK code to work correctly from within the API methods, I had to create a method attribute that identified a method as needing to be single-threaded. I forced single-threading by overriding the InvokeActionAsync() method of ApiControllerActionInvoker. If a method was not given the single-thread attribute, the overridden invoker simply used the normal base class InvokeActionAsync().
public class SmartHttpActionInvoker : ApiControllerActionInvoker
{
    public override Task<HttpResponseMessage> InvokeActionAsync(HttpActionContext context, CancellationToken cancellationToken)
    {
        // Determine whether the action has the UseStaThread attribute
        bool useStaThread = context.ActionDescriptor.GetCustomAttributes<UseStaThreadAttribute>().Any();

        // If it doesn't, simply return the result of the base method
        if (!useStaThread)
        {
            return base.InvokeActionAsync(context, cancellationToken);
        }

        // Otherwise, create a single STA thread and then call the base method
        Task<HttpResponseMessage> responseTask = Task.Factory.StartNewSta(() => base.InvokeActionAsync(context, cancellationToken).Result);
        return responseTask;
    }
}
public static class TaskFactoryExtensions
{
    private static readonly TaskScheduler _staScheduler = new StaTaskScheduler(numberOfThreads: 1);

    public static Task<TResult> StartNewSta<TResult>(this TaskFactory factory, Func<TResult> action)
    {
        return factory.StartNew(action, CancellationToken.None, TaskCreationOptions.None, _staScheduler);
    }
}
public static void Register(HttpConfiguration config)
{
    ....
    config.Services.Replace(typeof(IHttpActionInvoker), new SmartHttpActionInvoker());
    ...
}
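For reference, UseStaThreadAttribute is assumed to be nothing more than a marker attribute:

[AttributeUsage(AttributeTargets.Method)]
public class UseStaThreadAttribute : Attribute
{
}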
This worked well until I noticed something odd: my logging database was recording duplicate records whenever a method NOT marked as single-threaded threw an HttpResponseException back to the client. This behavior did not occur when the same method returned OK().
Debugging, I watched the code execute in the API method and reach the throw statement. The next line shown in the debugger after the exception was thrown was the InvokeActionAsync() code I wrote. Following this, the method ran again in full, hitting the thrown exception, then the action invoker, and then returning the result to the client. Effectively, it appears my override of InvokeActionAsync somehow causes the action invoker to be called twice, but I am not sure how.
EDIT: Confirmed that System.Threading.Thread.CurrentThread.ManagedThreadId at the point where the exception is thrown and logged is different for each execution of the API method. So this reinforces my belief that two threads are being created instead of one. Still not sure why.
Anyone have any experience with overriding the InvokeActionAsync behavior that might be able to explain this behavior? Thanks!
I am trying to load a document out of RavenDb via a WebAPI call. When I open an async IDocumentSession and call LoadAsync, I get no exception or result, and the thread exits instantly with no error code.
I was able to bypass all the structure of my API and reproduce the error.
Here is the code that will not work:
public IHttpActionResult GetMyObject(long id)
{
    try
    {
        var session = RavenDbStoreHolderSingleton.Store.OpenAsyncSession();
        var myObject = session.LoadAsync<MyObject>("MyObject/1").Result;
        return Ok(myObject);
    }
    catch (Exception e)
    {
        return InternalServerError(e);
    }
}
I simply hard-coded the object's id to 1 for testing, but calling the function for an object that doesn't exist has the same result.
However, this code works:
public async Task<IHttpActionResult> GetMyObject(long id)
{
    try
    {
        var session = RavenDbStoreHolderSingleton.Store.OpenAsyncSession();
        var myObject = await session.LoadAsync<MyObject>("MyObject/1");
        return Ok(myObject);
    }
    catch (Exception e)
    {
        return InternalServerError(e);
    }
}
Things I tried/fiddled with:
Changing the exceptions that are caught in debugging
Carefully monitoring Raven Studio to see if I could find any problems (I didn't, but I'm not sure I was looking in the right places)
Running the API without the debugger attached to see if the error occurred or if something showed up in Raven Studio (no changes)
So I guess I have stumbled on a "fix", but can someone explain why one of these would fail in such an odd way while the other one would work perfectly fine?
In the real application, the API call did not have the async/await pair, but the code that was making the call was actually using async/await.
Here is the repository class that was failing which caused me to look into this issue:
public async Task<MyObject> Load(string id)
{
    return await _session.LoadAsync<MyObject>(id);
}
The first part fails by design. For an ASP.NET async call there is a synchronization context, and you block it when you call Result on the returned Task; yet that same synchronization context is required for the call's continuation to run and return the data, so the two deadlock. Check out the following link by Stephen Cleary, where this mechanism is explained in detail.
The second part works because that is the correct way of using async/await, and it no longer deadlocks. The first part would only work in a console application, which doesn't have a synchronization context to block; UI frameworks like WinForms have a similar context and likewise need to use the second form of the code.
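If a blocking caller further up the stack can't be avoided, one common mitigation inside library code is ConfigureAwait(false), which resumes the continuation on a thread-pool thread instead of the captured context. A minimal sketch against the repository method from the question (note this only helps if every await beneath it also avoids capturing the context):

public async Task<MyObject> Load(string id)
{
    // Don't capture the ASP.NET synchronization context; the continuation
    // resumes on the thread pool, so a caller blocking on .Result upstream
    // cannot deadlock on the context.
    return await _session.LoadAsync<MyObject>(id).ConfigureAwait(false);
}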
I need to call several methods from an external framework, or rather I am writing a wrapper around it so that other users can call methods from this framework in a non-predetermined order. Now, some methods of the framework throw exceptions even though no "real" error occurred. Basically, they are internal exceptions meant only to notify whoever is calling that the requested action has already been performed before, for example that a file has already been loaded. It won't hurt to load the file another time, so for all I care this "error" is no error at all. So I need to continue on this exception, but I also need to catch other, real exceptions, such as when the framework, which connects to clients and such, cannot do so.
Below I have some (extremely simplified) example code. Obviously that code won't compile, because the code for the custom exceptions is missing. Also, in real life the code is spread over three assemblies, which means I cannot wrap the exception handler around only those framework methods that throw InternalFrameworkException(); I can only wrap it around the whole SomeMethod(). As I wrote, this is an extremely simplified example.
Is there any way to handle the RealException()s but continue on the InternalFrameworkException()s without using PostSharp as mentioned here? Note that this is not about letting the InternalFrameworkException()s fall through; they should actually not break out of the try{} block at all.
namespace ExceptionTest
{
    using System;

    internal class Program
    {
        private static void Main(string[] args)
        {
            try
            {
                SomeMethod();
            }
            catch (InternalFrameworkException exception)
            {
                // Do not actually catch it - but also don't break the
                // execution of SomeMethod().
                // Actually I never want to end up here...
            }
            catch (RealException exception)
            {
                // Break the execution of SomeMethod() as usual.
                throw;
            }
            catch (Exception exception)
            {
                // Again, break the execution of SomeMethod() as usual.
                throw;
            }
            finally
            {
                // Clean up.
            }
        }

        #region == Method is actually contained in another assembly referencing this assembly ===
        private static void SomeMethod()
        {
            // Should break if uncommented.
            // MethodThrowingProperException();

            // Should not break.
            MethodThrowingInternalExceptionOrRatherContinuableError();

            // Should execute, even if an internal framework error happened previously.
            MethodNotThrowingException();
        }
        #endregion

        #region ===== Framework methods, they are contained in a foreign dll =====
        private static void MethodThrowingProperException()
        {
            // Something happened which should break execution of the
            // application using the framework.
            throw new RealException();
        }

        private static void MethodThrowingInternalExceptionOrRatherContinuableError()
        {
            // Perform some work which might lead to a resumable error, i.e. an
            // error which should not break the continuation of the application,
            // such as initializing a value which is already initialized. The
            // point is to tell the application using this framework that the
            // value is already initialized, but as this won't influence the
            // execution at all, it is rather a notification.
            throw new InternalFrameworkException();
        }

        private static void MethodNotThrowingException()
        {
            // Well, just do some stuff.
        }
        #endregion
    }
}
Edit: I did try the example in the post I linked above, and it works like a charm... when using it in SomeMethod() only. I could theoretically implement this, as I am wrapping all the methods that are called in SomeMethod() before exposing them to the final assembly, but I dislike this approach because it adds unnecessary complexity to my code.
When an exception is thrown, the execution flow is broken. You can catch the exception or not but you cannot "continue" after the exception is thrown.
You can split your logic into parts and continue to the next part when one throws an exception, though.
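For example, using the method names from the question, each continuable call gets its own try/catch so that the remaining parts still run:

try
{
    MethodThrowingInternalExceptionOrRatherContinuableError();
}
catch (InternalFrameworkException)
{
    // Just a notification; swallow it and keep going.
}

// This still executes even if the call above threw.
MethodNotThrowingException();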
I'm not sure of a way apart from an AOP approach in this case. Given that you are unable to change SomeMethod() or any of the methods it calls, you will need to look at adorning the called methods like MethodThrowingInternalExceptionOrRatherContinuableError() with an aspect that catches the 'continuable' exceptions.
The aspect would effectively wrap the method call in a try{...} catch(InternalFrameworkException) (or similar catchable exception) block.
As you have already noted, you are unable to drop back into a method once it has thrown an exception, even if the caller catches the exception in a catch() block, so you need to inject into the methods you are calling, which an AOP framework like PostSharp will allow you to do.
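For illustration, such an aspect might look roughly like the sketch below, assuming PostSharp's OnExceptionAspect (the attribute name is invented):

[Serializable]
public class ContinueOnInternalExceptionAttribute : OnExceptionAspect
{
    public override void OnException(MethodExecutionArgs args)
    {
        if (args.Exception is InternalFrameworkException)
        {
            // Swallow the notification-style exception and return to the
            // caller as if the method had completed normally.
            args.FlowBehavior = FlowBehavior.Return;
        }
    }
}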
I have solved a similar problem by wrapping the calls to InternalFrameworkMethod() in try-catch(InternalFrameworkException) blocks, naming the wrapper something like InternalFrameworkMethodSafe(), and then having SomeMethod call the treated InternalFrameworkMethodSafe().
void InternalFrameworkMethodSafe()
{
    try
    {
        InternalFrameworkMethod();
    }
    catch (InternalFrameworkException e)
    {
        Trace.Write("error in internal method" + e);
    }
}

void SomeMethod()
{
    ...
    InternalFrameworkMethodSafe();
    ...
}
It may not work in your case if the internal framework ends up in a bad state and is not able to continue.
The thing is that SQL Server sometimes chooses a session as its deadlock victim when two processes lock each other out. One process does an update and the other just a read. During the read, SQL Server takes so-called 'shared locks', which do not block other readers but do block updaters. So far the only way to resolve this is to reprocess the victimized request.
Now this is happening in a web application, and I would like to have a mechanism that can do the reprocessing (say, with a maximum of 5 retries) when needed.
I've looked at IHttpModule, which has BeginRequest() and EndRequest() events (amongst others), but that does not give me the ability to reprocess the request.
In fact, what I need is something that forces itself between the HTTP handler and the process being called.
I could write something like this:
int maxtries = 5;
while (maxtries > 0)
{
    try
    {
        using (var scope = Session.OpenTransaction())
        {
            // process
            scope.Complete(); // commit
            return result;
        }
    }
    catch (DeadlockException dlex)
    {
        maxtries--;
    }
    catch (Exception ex)
    {
        throw;
    }
}
but I would have to write that for all requests, which is tedious and error prone. It would be nice if I could just configure a kind of reprocessing handler via the Web.Config that is called automatically and does the deadlock reprocessing for me.
If you're getting deadlocks, you've got something wrong in your DB layer. You're missing indices or something similar, or you are doing out-of-sequence updates within transactions that are locking dependent entities.
Regardless, using HTTP as a mechanism to handle this error is not the way to go.
If you truly need to retry a deadlock, then you should wrap the attempt in your own function and retry almost exactly as you describe above.
BUT I would strongly suggest that you identify the cause of the deadlock and resolve it.
Hope that does not sound too dismissive of your problem, but fix the cause of the problem not the symptoms.
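For reference, wrapping the attempt in your own function, as suggested above, can be as simple as the sketch below, where DeadlockException stands for however your data layer surfaces SQL Server error 1205:

public static T RetryOnDeadlock<T>(Func<T> action, int maxRetries = 5)
{
    while (true)
    {
        try
        {
            return action();
        }
        catch (DeadlockException)
        {
            if (--maxRetries <= 0) throw;
            // Chosen as the deadlock victim: back off briefly, then rerun
            // the whole transaction from the start.
            System.Threading.Thread.Sleep(100);
        }
    }
}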
Since you're using MVC, and assuming it is safe to rerun your entire action on DB failure, you can simply write a common base controller class from which all of your controllers inherit (if you don't already have one), and in it override OnActionExecuting, trap the specific exception(s), and retry. This way you'll have the code in only one place, but, again, it assumes it is safe to rerun the entire action in such a case.
Example:
public abstract class MyBaseController : Controller
{
    protected override void OnActionExecuting(
        ActionExecutingContext filterContext
    )
    {
        int maxtries = 5;
        while (maxtries > 0)
        {
            try
            {
                base.OnActionExecuting(filterContext);
                return;
            }
            catch (DeadlockException dlex)
            {
                maxtries--;
            }
            catch (Exception ex)
            {
                throw;
            }
        }
        throw new Exception("Persistent DB locking - max retries reached.");
    }
}
... and then simply update every relevant controller to inherit from this controller (again, if you don't already have a common controller).
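For example (the controller name is hypothetical):

public class OrdersController : MyBaseController
{
    // Every action on this controller now gets the retry-on-deadlock behavior.
}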
EDIT: BTW, Bigtoe's answer is correct: deadlock is the cause and should be dealt with accordingly. The above solution is really a workaround for when the DB layer cannot be reliably fixed. The first attempt should be to review and (re-)structure the queries so as to avoid deadlocks in the first place; only if that is not practical should the above workaround be employed.