I'm using Ninject to manage my session for a Web API/MVC application. The code is as follows:
Bind<ISession>().ToMethod(c => c.Kernel.Get<ISessionFactory>().OpenSession())
    .InRequestScope()
    .OnActivation(s => s.BeginTransaction())
    .OnDeactivation(s =>
    {
        try
        {
            s.Transaction.Commit();
        }
        catch (Exception)
        {
            s.Transaction.Rollback();
        }
        s.Close();
        s.Dispose();
    });
The OnActivation code is called correctly: when the session is injected, a transaction is begun. However, when the request finishes, OnDeactivation is not called. As a result I can query things from the database but not commit changes (unless I commit the transaction elsewhere).
I'm not really sure why OnDeactivation isn't being called - am I missing something in my Ninject setup?
Calling Commit during OnDeactivation is a really bad idea, because OnDeactivation will always be called, even if an exception is thrown from within the business layer. In case of an error, you clearly don't want to commit the transaction.
You should consider committing on a different level. This q/a talks about this in more detail and shows how to solve this problem.
Also note that your code is overly verbose. If you call Dispose, you don't have to call Close, and if you call Dispose on an uncommitted transaction, the transaction is automatically rolled back. You can even pull the plug; the database will automatically roll back an uncommitted transaction. In other words, you can easily simplify your code to the following:
.OnDeactivation(s =>
{
    try
    {
        s.Transaction.Commit();
    }
    finally
    {
        s.Dispose();
    }
});
You can even remove the Dispose when you make use of the OnePerRequestHttpModule as described here. This reduces the code further to:
.OnDeactivation(s => s.Transaction.Commit());
But again, OnDeactivation is absolutely the wrong place to commit.
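If it helps, here is a minimal sketch of one higher-level alternative: an MVC action filter that commits only when the action completed without an exception. The filter name and the use of DependencyResolver to fetch the request-scoped session are illustrative assumptions, not part of the original setup:

public class CommitTransactionFilter : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Resolve the same request-scoped ISession that Ninject injected.
        var session = DependencyResolver.Current.GetService<ISession>();
        if (filterContext.Exception == null)
        {
            session.Transaction.Commit(); // the request succeeded
        }
        // On failure, do nothing: disposing the session rolls the
        // uncommitted transaction back, as noted above.
        base.OnActionExecuted(filterContext);
    }
}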
Related
In our code base, we use TransactionScope extensively to manage our transactions. We have code that could look like this in one part of our codebase:
// options declared elsewhere
using var transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionScopeOptions, TransactionScopeAsyncFlowOption.Enabled);
await _repository.DeleteAll(cancellationToken);
// do more stuff, that might trigger a call to SaveChangesAsync somewhere
transactionScope.Complete();
Then, in our repository implementation, we may have something that looks like this:
public async Task DeleteAll(CancellationToken cancellationToken)
{
    // This may not even be necessary
    if (_dbContext.Database.GetDbConnection().State != ConnectionState.Open)
    {
        await _dbContext.Database.OpenConnectionAsync(cancellationToken);
    }

    await _dbContext.Database.ExecuteSqlRawAsync("DELETE FROM ThatTable", cancellationToken);
}
The documentation of ExecuteSqlRawAsync states that no transaction is started by that method. This leads me to my question: what is the proper way to start a transaction and have it enlisted in the transaction scope so that the call to Complete will commit this transaction along with the other work we have EF do?
As I understand it, your goal is to run both DeleteAll (which uses ExecuteSqlRawAsync) and any subsequent SaveChangesAsync in the same transaction. If so, your code already achieves that.
Yes, ExecuteSqlRawAsync does not start a separate transaction, but you do not need another one: you are already inside a transaction, because you are inside a TransactionScope. SqlClient (or whichever other provider you use with EF) will notice the ambient Transaction.Current when the connection is opened and will enlist in it. Both ExecuteSqlRawAsync and SaveChangesAsync will run inside that transaction and commit or roll back together (I verified this to be sure).
The comment about "doesn't start transaction" is more for situations like:
ExecuteSqlRawAsync("delete from sometable where id = 1;delete from sometable where id = 2;");
where you might indeed want to run your SQL inside a transaction (assuming one doesn't already exist), and so the docs warn you that the method will not do that for you.
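In that standalone case (no ambient TransactionScope to enlist in), a minimal sketch of starting the transaction yourself through EF Core might look like this, assuming EF Core 3 or later; the SQL is just the placeholder from above:

// Only needed when there is no ambient transaction to enlist in.
await using var tx = await _dbContext.Database.BeginTransactionAsync(cancellationToken);
await _dbContext.Database.ExecuteSqlRawAsync(
    "delete from sometable where id = 1;delete from sometable where id = 2;",
    cancellationToken);
await tx.CommitAsync(cancellationToken); // disposing without committing rolls back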
I believe the best approach would be to do as follows (start a TransactionScope and do the work inside it, so that the call to Complete will commit it):
using (var scope = new TransactionScope(...))
{
    ...
    scope.Complete();
}
The ambient transaction begins as soon as execution enters the using block.
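You can see this with a small sketch (hedged; requires System.Transactions): inside the block Transaction.Current is non-null and active, outside it is null again.

using (var scope = new TransactionScope())
{
    // The ambient transaction now exists and is active.
    Console.WriteLine(Transaction.Current.TransactionInformation.Status); // Active
    scope.Complete();
}
// After the closing brace, Transaction.Current is null again.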
I need to know whether my transaction scope was successful, i.e. whether the records were saved to the database or not.
Note: I have this scope in the service layer, and I do not wish to include a try-catch block.
bool txExecuted = false;
using (var tx = new TransactionScope())
{
    // code
    // 1 SAVING RECORDS IN DB
    // 2 SAVING RECORDS IN DB
    tx.Complete();
    txExecuted = true;
}

if (txExecuted)
{
    // SAVED SUCCESSFULLY
}
else
{
    // NOT SAVED. FAILED
}
The commented code will be doing updates and will probably be implemented using ExecuteNonQuery(), which returns the number of rows affected. Keep track of all the return values to know how many rows were affected.
The transaction as a whole will either succeed or throw an exception when it completes. If no exception is encountered, the transaction was successful; if an exception occurs, some part of the transaction failed and none of it took place.
By considering these two facts (the affected-row counts, and whether the transaction threw) you can know whether the save worked and how many rows were affected.
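A minimal sketch of that idea (hedged; the connection string, table, and SQL are placeholders, and it requires System.Transactions and System.Data.SqlClient):

int rowsAffected = 0;
using (var tx = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
{
    conn.Open(); // the connection enlists in the ambient transaction
    using (var cmd = new SqlCommand("UPDATE SomeTable SET Flag = 1", conn))
    {
        rowsAffected += cmd.ExecuteNonQuery(); // rows touched by this statement
    }
    tx.Complete();
}
// Reaching this line without an exception means the commit went through,
// and rowsAffected tells you how much work was done.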
I didn't quite understand the purpose of txExecuted: if an exception occurs it will never be set, and the if will never even be reached, so the only code that can actually run is the branch inside if (true). I also don't see how you can decide not to use a try/catch and still hope to do anything useful with a system that is designed to throw an exception when something goes wrong; saying you don't want to catch exceptions isn't going to stop them from happening and affecting the control flow of your program.
To be clear, calling the Complete() method is only an indication that all operations within the scope completed successfully.
However, you should also note that calling this method does not guarantee a commit of the transaction. It is merely a way of informing the transaction manager of your status. After calling this method, you can no longer access the ambient transaction via the Current property, and trying to do so results in an exception being thrown.
The actual work of commit between the resource managers happens at the End Using statement if the TransactionScope object created the transaction.
Since you are using ADO.NET, ExecuteNonQuery will return the number of rows affected. You can also do a database lookup after the commit, outside of the using block.
In my opinion, it's a mistake not to have a try/catch. You want to catch the TransactionAbortedException and log it.
try
{
    using (var scope = new TransactionScope())
    {
        using (var conn = new SqlConnection("connection string"))
        {
            // do the transactional work here
        }

        // The Complete method commits the transaction. If an exception has been thrown,
        // Complete is not called and the transaction is rolled back.
        scope.Complete();
    }
}
catch (TransactionAbortedException ex)
{
    // log
}
The thing is that SQL Server sometimes chooses a session as its deadlock victim when two processes lock each other out. One process does an update and the other just a read. During the read, SQL Server takes so-called 'shared locks', which do not block other readers but do block updaters. So far the only way to solve this is to reprocess the victimized request.
Now this is happening in a web application, and I would like a mechanism that can do the reprocessing (say, with a maximum of 5 attempts) when needed.
I've looked at IHttpModule, which raises BeginRequest() and EndRequest() events (amongst others), but that does not give me the ability to reprocess the request.
In fact what I need is something that forces itself between the HTTP handler and the process being called.
I could write something like this:
int maxtries = 5;
while (maxtries > 0)
{
    try
    {
        using (var scope = Session.OpenTransaction())
        {
            // process
            scope.Complete(); // commit
            return result;
        }
    }
    catch (DeadlockException)
    {
        maxtries--;
    }
}
but I would have to write that for all requests, which is tedious and error prone. It would be nice if I could just configure a kind of reprocessing handler via the Web.Config that is called automatically and does the deadlock reprocessing for me.
If you're getting deadlocks, you've got something wrong in your DB layer: you're missing indices or something similar, or you are doing out-of-sequence updates within transactions that are locking dependent entities.
Regardless, using HTTP as a mechanism to handle this error is not the way to go.
If you truly need to retry a deadlock, then you should wrap the attempt in your own function and retry, almost exactly as you describe above.
BUT I would strongly suggest that you identify the cause of the deadlock and resolve it.
Hope that does not sound too dismissive of your problem, but fix the cause of the problem, not the symptoms.
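If you do go the wrapper route, a minimal sketch might look like the following. SQL Server reports a deadlock victim as a SqlException with error number 1205; the helper name and the backoff are illustrative assumptions (requires System.Data.SqlClient and System.Threading):

public static T RetryOnDeadlock<T>(Func<T> action, int maxRetries = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return action();
        }
        catch (SqlException ex) when (ex.Number == 1205 && attempt < maxRetries)
        {
            // Chosen as the deadlock victim: back off briefly, then retry.
            Thread.Sleep(100 * attempt);
        }
    }
}

You would then call it as, e.g., var result = RetryOnDeadlock(() => ProcessRequest());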
Since you're using MVC, and assuming it is safe to rerun your entire action on DB failure, you can simply write a common base controller class from which all of your controllers inherit (if you don't already have one), and in it override OnActionExecuting, trap the specific exception(s), and retry. This way you'll have the code in only one place, but, again, this assumes it is safe to rerun the entire action.
Example:
public abstract class MyBaseController : Controller
{
    protected override void OnActionExecuting(
        ActionExecutingContext filterContext
    )
    {
        int maxtries = 5;
        while (maxtries > 0)
        {
            try
            {
                base.OnActionExecuting(filterContext);
                return;
            }
            catch (DeadlockException)
            {
                maxtries--;
            }
        }
        throw new Exception("Persistent DB locking - max retries reached.");
    }
}
... and then simply update every relevant controller to inherit from this controller (again, if you don't already have a common base controller).
EDIT: Btw, Bigtoe's answer is correct: deadlock is the cause and should be dealt with accordingly. The above solution is really a workaround if the DB layer cannot be reliably fixed. The first attempt should be to review and (re-)structure queries so as to avoid deadlocks in the first place. Only if that is not practical should the above workaround be employed.
I think I might have my Unit of Work set up wrong in my architecture. Here is what I currently have (indented to show order):
HttpRequest.Begin()
    UnitOfWork.Begin()
        Session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
Here, I call various services to perform CRUD using NHibernate. When I want to make a change to the database (update/save), I call this code:
using (var transaction = unitOfWork.Session.BeginTransaction())
{
    try
    {
        // These are just generics
        ret = (Key)unitOfWork.Session.Save(entity);
        transaction.Commit();
        unitOfWork.Session.Clear();
    }
    catch
    {
        transaction.Rollback();
        unitOfWork.Session.Clear();
        unitOfWork.DiscardSession();
        throw;
    }
}
When the HttpRequest is over, I perform these steps:
UnitOfWork.Commit()
    Transaction.Commit() // This is my session's transaction from the Begin above
I am running into issues with being able to roll back large batch processes. Because I am committing my transactions in my CRUD layer, as seen above, my transaction is no longer active, and when I try to roll back in my UnitOfWork, it does nothing because the transaction has already been committed. The reason I'm committing in my CRUD layer is so I can persist my data as quickly as possible without locking the database for too long.
What is the best course of action in a situation like the one above? Do I just make special CRUD operations that don't commit for batch jobs and handle the commit at the end of the job, or is my logic flawed in my UnitOfWork and session-per-request? Any suggestions?
You've discovered the reason why the session-per-request pattern is so popular and the problems that can stem from micro-managing your unit of work.
Typically with each web request, everything that needs to be done within that request can be thought of as one unit of work, so it stands to reason that you should have only one unit of work and one NHibernate session open during that single web request.
Also, I think you may be a bit confused about how NHibernate works, judging by this sentence in your question: "The reason I'm committing my code in my CRUD layer is so I can persist my data as quickly as possible without locking the database for too long."
NHibernate is not going to cause any locking in your database. Every time you call ISession.Save(entity), as long as you do not call ISession.Flush() or ITransaction.Commit(), nothing is written to the database; rather, the entity is added to a queue of items to be inserted or updated when the current transaction is committed at the end of the web request.
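To make that queuing behavior concrete, here is a hedged sketch (the Customer entity and the batch size are made up for illustration):

using (var tx = session.BeginTransaction())
{
    for (int i = 0; i < 10000; i++)
    {
        session.Save(new Customer { Name = "c" + i }); // queued only, no SQL yet
        if (i % 100 == 0)
        {
            session.Flush(); // pushes the queued inserts to the DB, still inside the tx
            session.Clear(); // keeps the first-level cache from growing unbounded
        }
    }
    tx.Commit(); // one commit at the end; rolling back instead undoes everything
}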
So your session-per-request should be set up like so:
void Application_BeginRequest()
{
    // Start your unit of work, open a session and begin a transaction
}

// Do all of your work ( Read, insert, update, delete )

void Application_EndRequest()
{
    try
    {
        // UnitOfWork.Current.Transaction.Commit();
    }
    catch (Exception e)
    {
        // UnitOfWork.Current.Transaction.Rollback();
    }
}
Of course there are many ways to do this same thing, but these are the basics of the session-per-request pattern: only one session for the entire web request.
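For reference, a concrete (and hedged) sketch of that skeleton using Global.asax, with the session parked in HttpContext.Items; SessionFactoryHolder stands in for however you hold your ISessionFactory singleton:

public class Global : HttpApplication
{
    protected void Application_BeginRequest()
    {
        var session = SessionFactoryHolder.Factory.OpenSession();
        session.BeginTransaction();
        HttpContext.Current.Items["nh.session"] = session;
    }

    protected void Application_EndRequest()
    {
        var session = (ISession)HttpContext.Current.Items["nh.session"];
        if (session == null) return;
        try
        {
            if (Server.GetLastError() == null)
                session.Transaction.Commit();   // request succeeded
            else
                session.Transaction.Rollback(); // something threw during the request
        }
        finally
        {
            session.Dispose();
        }
    }
}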
Is there a way to know in a WCF operation that a transaction has committed?
OK, second attempt at being more specific.
I've got a WCF service with an operation that allows transaction flow.
Now when a client calls my WCF service, the call can carry a transaction. But my service is also interested in whether the transaction on the client ultimately succeeded, because if everything went well at my WCF service level, it has other things to do, but only if all transactions have been committed....
Is there an event I can subscribe to, or something similar?
It depends on the service itself and how you are handling transactions. If you are engaging in transactions in WCF through WS-Transaction, then if the call from the client succeeds without an exception, you can assume the transaction took place.
However, if this is in the context of another transaction, then you can't be sure if the transaction went through until the containing transaction is completed.
Even if you are using the TransactionScope class, if you have the service enabled to use transactions, you still have to take into account the encompassing transaction (if there is one).
You will have to provide more information about where the transaction is in relation to the call in order for a more complete answer.
Try using the operation behavior attribute below on the operation that allows transaction flow:
[OperationBehavior(TransactionScopeRequired=true)]
If a transaction flows from the client, then the service will use it.
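For context, a hedged sketch of the contract and operation attributes involved (the service and Order type are illustrative):

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void SubmitOrder(Order order);
}

public class OrderService : IOrderService
{
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitOrder(Order order)
    {
        // Executes inside the flowed client transaction when one is present,
        // or inside a new local transaction otherwise.
    }
}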
bool isTransactionComplete = true;
try
{
    using (TransactionScope trScope = new TransactionScope(TransactionScopeOption.Required))
    {
        // some work
        trScope.Complete();
    }
}
catch (TransactionAbortedException e)
{
    // Transaction holder got an exception from some service
    // and canceled the transaction
    isTransactionComplete = false;
}
catch // other exception
{
    isTransactionComplete = false;
    throw;
}

if (isTransactionComplete)
{
    // Success
}
As casperOne wrote, it depends on the settings. But you should be aware of complex transaction shapes like:
1) a sessionful service with simultaneous transactions for one service instance
2) a transaction inside another transaction
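For case 2, note that a nested TransactionScope joins the ambient transaction by default, so the inner Complete is only a vote; a hedged sketch:

using (var outer = new TransactionScope())
{
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // work done here joins the outer (ambient) transaction
        inner.Complete(); // only a vote; nothing is committed yet
    }
    outer.Complete(); // the real commit happens when the outer scope disposes
}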