I have services that are called through a 'Guardian' method, which opens a TransactionScope for each request and completes the transaction if everything succeeds:
void ExecuteWorker(...)
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        ...CallLogicMethods...
        scope.Complete();
    }
}
One of the methods interacts with an 'external' service, and if that interaction fails, my whole transaction fails as well. As a result, I don't save the required data (calculated before the request to the external service).
void DoLogic1(...)
{
    CalculateSomeData(...);
    SaveCalculatedData(...);
    DoRequestToExternalService(...);
}
What is the best way to resolve that issue?
The application is written in C# on .NET 4.0, with MS SQL 2008.
I see two possible solutions myself.
Using try/catch:
void DoLogic11(...)
{
    CalculateSomeData(...);
    SaveCalculatedData(...);
    try
    {
        DoRequestToExternalService(...);
    }
    catch(Exception exc)
    {
        LogError(...);
    }
}
The drawback of this approach is that I'm hiding the exception from the caller, but I would like to pass the error outside as an exception (to be logged, etc.).
Using a 'nested transaction', but I'm not sure how that works.
Here is how I envision it should work:
void DoLogic12(...)
{
    using (TransactionScope scopeNested = new TransactionScope(TransactionScopeOption.Suppress))
    {
        CalculateSomeData(...);
        SaveCalculatedData(...);
        scopeNested.Complete();
    }
    DoRequestToExternalService(...);
}
I've implemented that and tried to use it, but it seems the nested transaction is committed only when the outer transaction commits as well.
Please advise.
I am not sure I understood it correctly. Can you put all your logic methods in one try/catch, with each call being a separate transaction using TransactionScopeOption.RequiresNew?
void DoLogic1(...)
{
    try
    {
        CalculateSomeData(...);
        SaveCalculatedData(...);
        DoRequestToExternalService(...);
    }
    catch(Exception ex)
    {
        LogException(...);
        throw;
    }
}
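To make the "separate transaction per call" part concrete, here is a minimal sketch; the wrapper method is my own illustration, reusing the method names from your question:

void SaveCalculatedDataInOwnTransaction(...)
{
    // RequiresNew suspends any ambient transaction, so this commit
    // stands even if a later call causes the outer scope to roll back.
    using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        SaveCalculatedData(...);
        scope.Complete();
    }
}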
"But I would like to pass the error outside as an exception (to be logged, etc)."
Can you use throw for that?
I've decided to change my 'ExecuteWorker' method to create the transaction conditionally, so that I can create the transaction in the 'DoLogicX' method itself, as sketched below.
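For reference, a rough sketch of what that looks like (the wrapInTransaction flag is an illustration, not the exact code):

void ExecuteWorker(bool wrapInTransaction, ...)
{
    if (!wrapInTransaction)
    {
        // The DoLogicX method is now responsible for its own transactions.
        ...CallLogicMethods...
        return;
    }
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        ...CallLogicMethods...
        scope.Complete();
    }
}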
I am developing an ASP.NET MVC application, and I have created an error-handling system that forces me to write the following code in every function of my BLL:
try
{
    ...
    _unitOfWork.Save(nameof(Function));
}
catch
{
    if (rollbackTo != null)
    {
        _unitOfWork.RollbackToSave(rollbackTo);
    }
    else
    {
        _unitOfWork.Rollback();
    }
    throw;
}
This basically allows me to manage my transactions per request and handle a transaction's errors without my controllers knowing how it is actually done; a controller only gets to decide whether or not the transaction will continue (the rollbackTo parameter).
What I am wondering is: is there a way for me not to have to write this piece of code over and over? I thought about just throwing an exception and handling it in my pipeline, but since I need to return a valuable response to the user, and not just a 500 code, that isn't really an option. I also thought about creating a base class that calls an abstract method and implementing it per function, but that won't work either, since the parameters can change. Any ideas?
Yes, this is fairly standard.
For example, in the base class
public void DoSomethingAndRollbackThenThrow(Action<IUnitOfWork> action, string rollbackTo = null)
{
    try
    {
        ...
        action(_unitOfWork);
    }
    catch
    {
        if (rollbackTo != null)
        {
            _unitOfWork.RollbackToSave(rollbackTo);
        }
        else
        {
            _unitOfWork.Rollback();
        }
        throw;
    }
}
And then you can call it from a derived class like so:
public void DoSomethingSpecific()
{
    base.DoSomethingAndRollbackThenThrow(unitOfWork => {
        unitOfWork.Save(nameof(DoSomethingSpecific));
    });
}
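If the rollback target needs to vary per call, it can be passed through the same way (this assumes the optional rollbackTo parameter added to the base method above):

public void DoSomethingSpecific(string rollbackTo)
{
    base.DoSomethingAndRollbackThenThrow(unitOfWork => {
        unitOfWork.Save(nameof(DoSomethingSpecific));
    }, rollbackTo);
}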
You can use an AOP (Aspect-Oriented Programming) framework.
You can "weave" functionality like this into your methods by implementing it once and adding some attributes.
More about AOP:
https://en.wikipedia.org/wiki/Aspect-oriented_programming
An easy-to-use open source AOP Framework:
https://github.com/AntyaDev/KingAOP
There are also a bunch of alternatives (both commercial and open source); Google will give you good results about them.
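As an illustration only - this is not KingAOP's actual API, just a hand-rolled sketch of the idea - an attribute can mark the methods that need the rollback-and-rethrow behaviour, and a small invoker can weave it around the call (IUnitOfWork is the interface from your question):

using System;
using System.Reflection;

// Hypothetical marker attribute - illustrative, not a real framework API.
[AttributeUsage(AttributeTargets.Method)]
public class RollbackOnErrorAttribute : Attribute { }

public static class AspectInvoker
{
    // Invokes target.methodName(args); if the method carries
    // [RollbackOnError], a failure rolls the unit of work back and rethrows.
    public static object Invoke(object target, string methodName,
                                IUnitOfWork unitOfWork, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);
        bool guarded = method.GetCustomAttribute<RollbackOnErrorAttribute>() != null;
        try
        {
            return method.Invoke(target, args);
        }
        catch
        {
            if (guarded)
                unitOfWork.Rollback();
            throw;
        }
    }
}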
Background
We are trying to archive old user data to keep our most common tables smaller.
Issue
Normal EF code for removing records works for our custom tables. The AspNetUsers table is a different story. It appears that the way to do it is with _userManager.Delete or _userManager.DeleteAsync. These work as long as I don't try to make multiple DB calls in one transaction; when I wrap the calls in a TransactionScope, it times out. Here is an example:
public bool DeleteByMultipleIds(List<string> idsToRemove)
{
    try
    {
        using (var scope = new TransactionScope())
        {
            foreach (var id in idsToRemove)
            {
                var user = _userManager.FindById(id);
                //copy user data to archive table
                _userManager.Delete(user); //causes timeout
            }
            scope.Complete();
        }
        return true;
    }
    catch (TransactionAbortedException e)
    {
        Logger.Publish(e);
        return false;
    }
    catch (Exception e)
    {
        Logger.Publish(e);
        return false;
    }
}
Note that while the code is running, a direct call to the DB like:
DELETE
FROM ASPNETUSERS
WHERE Id = 'X'
will also time out. The same SQL works before the C# code is executed. Therefore, it appears that more than one DB hit locks the table. How can I find the user (DB hit #1) and delete the user (DB hit #2) in one transaction?
For me, the problem involved the use of multiple separate DbContexts within the same transaction; the BeginTransaction() approach did not work.
Internally, UserManager.Delete() calls an async method inside a RunSync() wrapper. Therefore, using the TransactionScopeAsyncFlowOption.Enabled parameter for my TransactionScope did work:
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    _myContext1.Delete(organisation);
    _myContext2.Delete(orders);
    _userManager.Delete(user);
    scope.Complete();
}
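(For context: TransactionScopeAsyncFlowOption.Enabled was added in .NET Framework 4.5.1 and lets the ambient transaction flow across await points instead of being lost when a continuation resumes on another thread.)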
Microsoft's advice is to use a different API when doing transactions with EF, because of how EF interacts with the TransactionScope class: implicitly, TransactionScope forces the isolation level up to Serializable, which causes the deadlock.
A good description of the relevant EF internal API is here: MSDN Link
For reference, you may need to look into whether the user manager exposes the data context, and replace your TransactionScope with using (var dbContextTransaction = context.Database.BeginTransaction()) { //code }.
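A rough sketch of that shape (ApplicationDbContext is an assumed Identity context name, and this only forms a single transaction if _userManager is backed by that same context):

using (var context = new ApplicationDbContext())
using (var dbContextTransaction = context.Database.BeginTransaction())
{
    foreach (var id in idsToRemove)
    {
        var user = _userManager.FindById(id);
        //copy user data to archive table
        _userManager.Delete(user);
    }
    dbContextTransaction.Commit();
}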
Alternatively, looking at your scenario, you are actually quite safe finding the user by ID, then trying to delete it, and just catching the error in the rare case that the user was deleted in the fraction of a second between finding and deleting it.
The thing is that SQL Server sometimes chooses a session as its deadlock victim when two processes lock each other out: one process does an update and the other just a read. During the read, SQL Server takes so-called 'shared locks', which do not block other readers but do block updaters. So far the only way to solve this is to reprocess the victimized thread.
Now this is happening in a web application, and I would like to have a mechanism that can do the reprocessing (say, with a maximum of 5 attempts) when needed.
I've looked at IHttpModule, which has BeginRequest() and EndRequest() events (amongst other events), but that does not give me the ability to reprocess the request.
In fact, what I need is something that forces itself between the HTTP handler and the process being called.
I could write something like this:
int maxtries = 5;
while(maxtries > 0)
{
    try
    {
        using(var scope = Session.OpenTransaction())
        {
            // process
            scope.Complete(); // commit
            return result;
        }
    }
    catch(DeadlockException)
    {
        maxtries--;
    }
    catch(Exception)
    {
        throw;
    }
}
but I would have to write that for every request, which is tedious and error-prone. It would be nice if I could just configure some kind of reprocessing handler via the Web.Config that is called automatically and does the deadlock reprocessing for me.
If you're getting deadlocks, you've got something wrong in your DB layer: you're missing indexes or something similar, or you are doing out-of-sequence updates within transactions that lock dependent entities.
Regardless, using HTTP as the mechanism to handle this error is not the way to go.
If you truly need to retry on a deadlock, then you should wrap the attempt in your own function and retry almost exactly as you describe above.
BUT I would strongly suggest that you identify the cause of the deadlock and resolve it.
I hope that doesn't sound too dismissive of your problem, but fix the cause of the problem, not the symptoms.
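For completeness, a retry wrapper of the kind described above might look like this (SqlException error number 1205 is SQL Server's deadlock-victim error; the helper name and backoff are illustrative):

using System;
using System.Data.SqlClient;
using System.Threading;

public static class DeadlockRetry
{
    // Runs work(), retrying only when SQL Server reports error 1205
    // (transaction chosen as deadlock victim), up to maxRetries attempts.
    public static T Execute<T>(Func<T> work, int maxRetries = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return work();
            }
            catch (SqlException ex)
            {
                if (ex.Number != 1205 || attempt >= maxRetries)
                    throw;
                Thread.Sleep(100 * attempt); // brief backoff before retrying
            }
        }
    }
}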
Since you're using MVC, and assuming it is safe to rerun your entire action on DB failure, you can simply write a common base controller class from which all of your controllers inherit (if you don't already have one), override OnActionExecuting in it, and trap the specific exception(s) and retry. This way you'll have the code in only one place, but, again, this assumes it is safe to rerun the entire action.
Example:
public abstract class MyBaseController : Controller
{
    protected override void OnActionExecuting(
        ActionExecutingContext filterContext
    )
    {
        int maxtries = 5;
        while(maxtries > 0)
        {
            try
            {
                base.OnActionExecuting(filterContext);
                return;
            }
            catch(DeadlockException)
            {
                maxtries--;
            }
            catch(Exception)
            {
                throw;
            }
        }
        throw new Exception("Persistent DB locking - max retries reached.");
    }
}
... and then simply update every relevant controller to inherit from this controller (again, if you don't already have a common controller).
EDIT: Btw, Bigtoe's answer is correct - the deadlock is the cause and should be dealt with accordingly. The above solution is really a workaround if the DB layer cannot be reliably fixed. The first attempt should be to review and (re-)structure the queries so as to avoid deadlocks in the first place. Only if that is not practical should the above workaround be employed.
We were trying to do some integrity checks on our database state for diagnostic reasons, so we wrapped our ORM modification queries in a TransactionScope coupled with a second query that ran diagnostics - something like this:
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, _maxTimeout))
{
    ORM.DeleteItem();
    ORM.CheckIntegrity();
    scope.Complete();
}
It's a hand-rolled ORM, and both of those calls end up doing their bit in a nested transaction scope down at the bottom. In other words, when you dig down, DeleteItem() has

using (TransactionScope newScope = new TransactionScope(TransactionScopeOption.Required, _maxTimeout))
{...}

and CheckIntegrity() has the same.
For the most part it's been working fine, but I've run across an odd condition. When someone puts bad inputs into the query, the DeleteItem() call can throw an exception. That exception is completely caught and handled at a stack level below the wrapper. I believe the exception is also thrown before that call gets as far as nesting its TransactionScope.
But when we get down to the nested scope creation in the CheckIntegrity() call, it throws a "Transaction was aborted" error from the CreateAbortingClone constructor. The inner exception is null.
Almost every other mention of CreateAbortingClone has to do with DTC promotion (or failure thereof), and the inner exception reflects that.
I'm inferring that the abort exception on the CheckIntegrity() call is due to the fact that DeleteItem() threw an exception - even though it was swallowed.
A) Is that a correct inference? Is a TransactionScope sensitive to any exception thrown inside it, handled or not?
B) Is there any way to detect that before making the CheckIntegrity() call? I mean other than re-doing our ORM to let the exception percolate up, or adding some other global flag?
I only know how this works with EF (Entity Framework):
using (var context = new MyContext(this._connectionString))
{
    using (var dbContextTransaction = context.Database.BeginTransaction())
    {
    }
}
The transaction is then linked to the context. I am not familiar with how your code makes that connection; it may be some fancy built-in stuff. It is then best to wrap this in a try/catch:
try
{
    // do-stuff
    context.SaveChanges();
    //NB!!!!!!
    //----------------------
    dbContextTransaction.Commit();
}
catch (Exception ex)
{
    dbContextTransaction.Rollback();
    //log why it was rolled back
    Logger.Error("Error during transaction, transaction rollback", ex);
}
So the final code would look like:
using (var context = new MyContext(this._connectionString))
{
    using (var dbContextTransaction = context.Database.BeginTransaction())
    {
        try
        {
            // do-stuff //
            context.SaveChanges();
            ///////////////////////
            //if any exception happens, changes won't be saved unless Commit is called
            //NB!!!!!!
            //----------------------
            dbContextTransaction.Commit();
        }
        catch (Exception ex)
        {
            dbContextTransaction.Rollback();
            //log why it was rolled back
            Logger.Error("Error during transaction, transaction rollback", ex);
        }
    }
}
I've got a serviced component which looks something like this (not written by me):
[Transaction(TransactionOption.Required, Isolation = TransactionIsolationLevel.Serializable, Timeout = 120), EventTrackingEnabled(true)]
public class SomeComponent : ServicedComponent
{
    public void DoSomething()
    {
        try
        {
            //some db operation
        }
        catch (Exception err)
        {
            ContextUtil.SetAbort();
            throw;
        }
    }
}
Is the ContextUtil.SetAbort() really required? Won't the exception abort the transaction when the component is left?
Only if you want to manage the transaction manually.
Your component will automatically vote to abort (if any exception is raised) or to commit, if you decorate your operation with the [AutoComplete] attribute, in this way:
[AutoComplete]
public void DoSomething()
EDIT:
For more info about this attribute, see MSDN here:
"The transaction automatically calls SetComplete if the method call returns normally. If the method call throws an exception, the transaction is aborted."
Anyway, if you are in the rare case where you really do need to manage the transaction manually, it is really important that you don't leave your transactions in doubt. What I'm missing in your code is a ContextUtil.SetComplete() call, which should be made explicitly.
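In that manual case, the pattern is to vote explicitly on both paths, something like:

public void DoSomething()
{
    try
    {
        //some db operation
        ContextUtil.SetComplete(); // vote to commit
    }
    catch (Exception)
    {
        ContextUtil.SetAbort(); // vote to abort
        throw;
    }
}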