I think I might have my Unit of Work set up wrong in my architecture. Here is what I currently have (indented to show order):
HttpRequest.Begin()
    UnitOfWork.Begin()
        Session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
Here, I call various services to perform CRUD operations using NHibernate. When I want to make a change to the database (update/save), I call this code:
using (var transaction = unitOfWork.Session.BeginTransaction())
{
    try
    {
        // These are just generics
        ret = (Key)unitOfWork.Session.Save(entity);
        transaction.Commit();
        unitOfWork.Session.Clear();
    }
    catch
    {
        transaction.Rollback();
        unitOfWork.Session.Clear();
        unitOfWork.DiscardSession();
        throw;
    }
}
When the HttpRequest is over, I perform these steps:
UnitOfWork.Commit()
Transaction.Commit() // This is my session's transaction from the Begin above
I am running into issues rolling back large batch processes. Because I commit my transactions in my CRUD layer, as seen above, the transaction is no longer active, and when I try to roll back in my UnitOfWork it does nothing because the transaction has already been committed. The reason I commit in my CRUD layer is so I can persist my data as quickly as possible without locking the database for too long.
What is the best course of action to take with a situation like the one above? Do I just make special CRUD operations that don't commit for batch jobs and handle the commit at the end of the job, or is my logic flawed in my UnitOfWork and session-per-request setup? Any suggestions?
You've discovered the reason why the session-per-request pattern is so popular and the problems that can stem from micro-managing your unit of work.
Typically, with each web request, everything that needs to be done within that request can be thought of as one unit of work, so it stands to reason that you should have only one unit of work and one NHibernate session open during that single web request.
Also, I think you may be a bit confused about how NHibernate works due to this sentence in your question: "The reason I'm committing my code in my CRUD layer is so I can persist my data as quickly as possible without locking the database for too long."
NHibernate is not going to be causing any locking in your database. Every time you call ISession.Save(entity), as long as you do not call ISession.Flush() or ITransaction.Commit(), nothing is written to the database; the entity is simply added to a queue of items to be inserted or updated when the current transaction is committed at the end of the web request.
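To illustrate, here is a minimal sketch (the entities are placeholders, and it assumes an id generator that does not force an immediate INSERT, as identity would):

using (var transaction = unitOfWork.Session.BeginTransaction())
{
    // Save only registers the entities with the session; no SQL is sent yet
    var id = (Key)unitOfWork.Session.Save(entity);
    unitOfWork.Session.Save(anotherEntity);   // anotherEntity is just a placeholder

    // The queued INSERT/UPDATE statements are issued here, when the session
    // is flushed as part of committing the transaction
    transaction.Commit();
}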
So your session-per-request should be set up like so:
void Application_BeginRequest()
{
    // Start your unit of work, open a session and begin a transaction
}

// Do all of your work ( Read, insert, update, delete )

void Application_EndRequest()
{
    try
    {
        // UnitOfWork.Current.Transaction.Commit();
    }
    catch (Exception e)
    {
        // UnitOfWork.Current.Transaction.Rollback();
    }
}
Of course there are many ways to do this same thing, but these are the basics of the session-per-request pattern: only one session for the entire web request.
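For completeness, here is one way that skeleton might be filled in. The static session factory, the HttpContext.Items key, and the bootstrap details are assumptions for the sketch, not code from the question:

using System;
using System.Data;
using System.Web;
using NHibernate;

public class Global : HttpApplication
{
    // Built once in Application_Start (bootstrap not shown)
    private static ISessionFactory _sessionFactory;
    private const string SessionKey = "nh.current-session";

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Start the unit of work: open a session and begin a transaction
        var session = _sessionFactory.OpenSession();
        session.BeginTransaction(IsolationLevel.ReadCommitted);
        HttpContext.Current.Items[SessionKey] = session;
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var session = (ISession)HttpContext.Current.Items[SessionKey];
        if (session == null)
            return;

        try
        {
            // Commit flushes the session and writes all queued changes
            if (session.Transaction != null && session.Transaction.IsActive)
                session.Transaction.Commit();
        }
        catch
        {
            if (session.Transaction != null && session.Transaction.IsActive)
                session.Transaction.Rollback();
            throw;
        }
        finally
        {
            session.Dispose();
        }
    }
}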
I'm using Ninject to manage my session for a Web API/MVC application. The code is as follows:
Bind<ISession>().ToMethod(c => c.Kernel.Get<ISessionFactory>().OpenSession())
    .InRequestScope()
    .OnActivation(s => s.BeginTransaction())
    .OnDeactivation((s) =>
    {
        try
        {
            s.Transaction.Commit();
        }
        catch (Exception e)
        {
            s.Transaction.Rollback();
        }
        s.Close();
        s.Dispose();
    });
The OnActivation code is called correctly - when the session is injected, a transaction is begun. However, when the request finishes, the OnDeactivation is not called. Therefore I can query things from the database but not commit changes (unless I commit the transaction elsewhere).
I'm not really sure why the OnDeactivation isn't being called - am I missing something in my Ninject setup?
Calling Commit during OnDeactivation is a really bad idea, because OnDeactivation will always be called, even if an exception is thrown from within the business layer. In case of an error, you clearly don't want to commit the transaction.
You should consider committing at a different level. This Q&A talks about this in more detail and shows how to solve the problem.
Also note that your code is overly verbose. If you call Dispose, you don't have to call Close, and if you call Dispose on an uncommitted transaction, the transaction is automatically rolled back. You can even pull the plug; the database will automatically roll back an uncommitted transaction. In other words, you can easily simplify your code to the following:
.OnDeactivation((s) =>
{
    try
    {
        s.Transaction.Commit();
    }
    finally
    {
        s.Dispose();
    }
});
You can even remove the Dispose when you make use of the OnePerRequestHttpModule as described here. This reduces the code further to:
.OnDeactivation(s => s.Transaction.Commit());
But again, OnDeactivation is absolutely the wrong place to commit.
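As a rough illustration of committing at a different level, the Ninject binding could be left responsible only for opening and disposing the session, while a small unit-of-work wrapper (a made-up class for this sketch, not part of Ninject or NHibernate) owns the commit, so it happens only when the business operation actually succeeded:

using System;
using NHibernate;

public class UnitOfWork : IDisposable
{
    private readonly ISession _session;
    private readonly ITransaction _transaction;

    public UnitOfWork(ISession session)
    {
        _session = session;
        _transaction = _session.BeginTransaction();
    }

    public ISession Session
    {
        get { return _session; }
    }

    // Called by the application layer only when the operation succeeded
    public void Commit()
    {
        _transaction.Commit();
    }

    public void Dispose()
    {
        // Disposing an uncommitted ITransaction rolls it back
        _transaction.Dispose();
    }
}

A controller action or command handler would then wrap its work in using (var uow = new UnitOfWork(session)) { ... uow.Commit(); }, and any exception simply skips the commit.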
I've been looking into transactions for two days now; perhaps I'm missing something obvious after taking in so much information. The goal here is to block simultaneous requests. If the condition is true, data is inserted, after which the condition becomes false. Simultaneous requests both check the condition before any data can be inserted and then both try to insert data.
public async Task<ActionResult> Foo(Guid ID)
{
    Debug.WriteLine("entering transaction scope");
    using (var transaction = new TransactionScope(
        TransactionScopeOption.Required,
        new TransactionOptions() { IsolationLevel = IsolationLevel.Serializable },
        TransactionScopeAsyncFlowOption.Enabled
    ))
    {
        Debug.WriteLine("entered transaction scope");

        var context = new DbContext();

        Debug.WriteLine("querying");
        var foo = context.Foos.FirstOrDefault(/* condition */);
        Debug.WriteLine("done querying");

        context.Foos.Add(new Foo());

        /* async work here */

        Debug.WriteLine("saving");
        context.SaveChanges();
        Debug.WriteLine("saved");

        Debug.WriteLine("exiting transaction scope");
        transaction.Complete();
        Debug.WriteLine("exited transaction scope");

        return View();
    }
}
This is the debug output when executing two requests at once with Fiddler:
entering transaction scope
entered transaction scope
querying
done querying
entering transaction scope
entered transaction scope
querying
done querying
saving
saving
saved
A first chance exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll
exiting transaction scope
exited transaction scope
This is my understanding of how the code is supposed to work:
A transaction with serialisable isolation level is required, to prevent phantom reads.
I cannot use DbContext.Database.Connection.BeginTransaction, because when executing a query an error is thrown:
ExecuteReader requires the command to have a transaction when the connection assigned to the command is in a pending local transaction.
The Transaction property of the command has not been initialized.
TransactionScope normally causes problems when using task-based async.
http://entityframework.codeplex.com/discussions/429215
However, .NET 4.5.1 adds additional constructors to deal with this.
How to dispose TransactionScope in cancelable async/await?
EF 6 has improved transaction support, but I'm still on EF 5.
http://msdn.microsoft.com/en-us/data/dn456843.aspx
So obviously it's not working like I want it to. Is it possible to achieve my goal using TransactionScope? Or will I have to do an upgrade to EF 6?
Serializable isolation does not solve the old 'check then insert' problem. Two serializable transactions can concurrently evaluate the condition, conclude that they have to insert, then both attempt to insert, only for one to fail and one to succeed. This is because all reads are compatible with each other under serializable isolation (in fact they are compatible under all isolation levels).
There are many schools of thought on how to solve this problem. Some recommend using MERGE. Some recommend using a lock hint in the check query to acquire an X or U lock instead. Personally, I recommend always INSERTing and gracefully recovering from the duplicate key violation. Another approach that works is using explicit app locks.
EF and System.Transactions really only add noise to the question; this is fundamentally a back-end SQL problem. As for how to flow a transaction scope between threads, see Get TransactionScope to work with async / await (you already reference this in the question; I missed it on first read). You will need that to get your async code to enlist in the proper context, but the blocking/locking is still fundamentally a back-end problem.
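For example, the "always INSERT and recover" approach could look roughly like this with EF 5. The unique constraint enforcing the condition, and the Foo/Foos names, are assumptions carried over from the question; 2627 and 2601 are SQL Server's unique-key violation error numbers:

try
{
    context.Foos.Add(new Foo { ID = ID });   // assumes Foo has an ID key matching the action parameter
    context.SaveChanges();
}
catch (System.Data.Entity.Infrastructure.DbUpdateException ex)
{
    var sqlEx = ex.GetBaseException() as System.Data.SqlClient.SqlException;
    if (sqlEx == null || (sqlEx.Number != 2627 && sqlEx.Number != 2601))
        throw;

    // Another request won the race and inserted the row first;
    // treat this the same as the condition being false.
}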
Using a try-catch structure, I'm trying to figure out what to do if an exception is caught at any point of the transaction. Below is a code sample:
try
{
    DbContext.ExecuteSqlCommand("BEGIN TRANSACTION");                 // Line 1
    DbContext.ExecuteSqlCommand("Some Insertion/Deletion Goes Here"); // Line 2
    DbContext.ExecuteSqlCommand("COMMIT");                            // Line 3
}
catch (Exception)
{
}
If the exception was caught executing Line 1, nothing needs to be done besides reporting the error. If it was caught executing the second line, I don't know if I need to try to roll back the transaction that was successfully opened, and the same applies if something goes wrong with the third line.
Should I just send a rollback anyway? Or send all the commands straight to the database in a single method call?
Inside the try-catch there's a loop performing many transactions like the one in the sample (and I need lots of small transactions instead of one big one so I can reuse SQL Server's log file properly and keep it from growing unnecessarily).
If any of the transactions goes wrong I'll just need to delete them all and report what happened, but I can't turn that into one big transaction and just use rollback, otherwise it will make the log file grow to 40 GB.
I think this will help:
using (var ctx = new MyDbContext())
{
    // begin a transaction in EF - note: this returns a DbContextTransaction object
    // and will open the underlying database connection if necessary
    using (var dbCtxTxn = ctx.Database.BeginTransaction())
    {
        try
        {
            // use DbContext as normal - query, update, call SaveChanges() etc. E.g.:
            ctx.Database.ExecuteSqlCommand(
                @"UPDATE MyEntity SET Processed = 'Done' "
                + "WHERE LastUpdated < '2013-03-05T16:43:00'");

            var myNewEntity = new MyEntity() { Text = "My New Entity" };
            ctx.MyEntities.Add(myNewEntity);
            ctx.SaveChanges();

            dbCtxTxn.Commit();
        }
        catch (Exception e)
        {
            dbCtxTxn.Rollback();
        }
    } // if DbContextTransaction opened the connection then it will close it here
}
taken from: https://entityframework.codeplex.com/wikipage?title=Improved%20Transaction%20Support
Basically, the idea is that your transaction becomes part of the using block, and within that you have a try/catch around the actual SQL. If anything fails within the try/catch, the transaction is rolled back.
As of Entity Framework 6, ExecuteSqlCommand is wrapped with its own transaction as explained here: http://msdn.microsoft.com/en-gb/data/dn456843.aspx
Unless you explicitly need to roll multiple SQL commands into a single transaction, there is no need to explicitly begin a new transaction scope.
With respect to transaction log growth, assuming you are targeting SQL Server, setting the database recovery model to SIMPLE will ensure the log gets recycled between checkpoints.
Obviously, if the transaction log history is not being maintained across the entire import, there is no implicit mechanism to roll back all the data in case of failure. Keeping it simple, I would probably just add a 'created' datetime field to the table and, if I needed to delete all rows in case of error, delete from the table filtering on that field.
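A rough sketch of that idea, with made-up entity/column names (MyEntity, Created) and EF's positional @p0-style parameters:

var importStarted = DateTime.UtcNow;
var batchCount = 1000;   // stands in for the real number of small transactions

using (var ctx = new MyDbContext())
{
    try
    {
        // Many small, individually committed statements, as in the question
        for (int i = 0; i < batchCount; i++)
        {
            ctx.Database.ExecuteSqlCommand(
                "INSERT INTO MyEntity (Text, Created) VALUES (@p0, @p1)",
                "row " + i, importStarted);
        }
    }
    catch
    {
        // Each statement already committed, so undo the whole run manually
        ctx.Database.ExecuteSqlCommand(
            "DELETE FROM MyEntity WHERE Created >= @p0", importStarted);
        throw;
    }
}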
The thing is that SQL Server sometimes chooses a session as its deadlock victim when two processes lock each other out. One process does an update and the other just a read. During the read, SQL Server takes so-called shared locks, which do not block other readers but do block updaters. So far the only way to solve this is to reprocess the victimized thread.
Now, this is happening in a web application and I would like to have a mechanism that can do the reprocessing (say, a maximum of 5 times) when needed.
I've looked at the IHttpModule which has a BeginRequest() and EndRequest() event being called (amongst other events) but that does not give me the ability to reprocess the request.
In fact what I need is something that forces itself between the http handler and the process being called.
I could write something like this:
int maxtries = 5;
while(maxtries > 0)
{
try
{
using(var scope = Session.OpenTransaction())
{
// process
scope.Complete(); // commit
return result;
}
}
catch(DeadlockException dlex)
{
maxtries--;
}
catch(Exception ex)
{
throw;
}
}
but I would have to write that for all requests, which is tedious and error-prone. It would be nice if I could just configure a kind of reprocessing handler via the Web.config that is called automatically and does the deadlock reprocessing for me.
If you're getting deadlocks, you've got something wrong in your DB layer. You're missing indices or something similar, or you are doing out-of-sequence updates within transactions that lock dependent entities.
Regardless, using HTTP as a mechanism to handle this error is not the way to go.
If you truly need to retry a deadlock, then you should wrap the attempt in your own function and retry almost exactly as you describe above.
BUT I would strongly suggest that you identify the cause of the deadlock and resolve it.
Hope that does not sound too dismissive of your problem, but fix the cause of the problem, not the symptoms.
Since you're using MVC, and assuming it is safe to rerun your entire action on DB failure, you can simply write a common base controller class from which all of your controllers inherit (if you don't already have one), and in it override OnActionExecuting to trap the specific exception(s) and retry. This way you'll have the code in only one place; but, again, this assumes it is safe to rerun the entire action in such a case.
Example:
public abstract class MyBaseController : Controller
{
    protected override void OnActionExecuting(
        ActionExecutingContext filterContext
    )
    {
        int maxtries = 5;
        while (maxtries > 0)
        {
            try
            {
                base.OnActionExecuting(filterContext);
                return;
            }
            catch (DeadlockException dlex)
            {
                maxtries--;
            }
            catch (Exception ex)
            {
                throw;
            }
        }
        throw new Exception("Persistent DB locking - max retries reached.");
    }
}
... and then simply update every relevant controller to inherit from this controller (again, if you don't already have a common controller).
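For example, with a hypothetical OrdersController:

// Inherit from the common base instead of Controller;
// the retry logic then applies to every action in this controller.
public class OrdersController : MyBaseController
{
    public ActionResult Index()
    {
        // normal action code
        return View();
    }
}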
EDIT: Btw, Bigtoe's answer is correct - the deadlock is the cause and should be dealt with accordingly. The above solution is really a workaround if the DB layer cannot be reliably fixed. The first attempt should be reviewing and (re-)structuring queries so as to avoid deadlocks in the first place. Only if that is not practical should the above workaround be employed.
Is there a way to know in a wcf operation that a transaction has committed?
OK, second attempt at being more specific.
I've got a WCF service with an operation that has transaction flow allowed.
Now, when a client calls my WCF service it can flow a transaction. But my service is also interested in whether the transaction on the client side has succeeded, because at the service level, if everything went well, it has other things to do, but only once all transactions have been committed.
Is there an event I can subscribe to or something?
It depends on the service itself and how you are handling transactions. If you are engaging in transactions in WCF through WS-Transaction then if the call to the client succeeds without exception, you can assume the transaction took place.
However, if this is in the context of another transaction, then you can't be sure if the transaction went through until the containing transaction is completed.
Even if you are using the TransactionScope class, if you have the service enabled to use transactions, you still have to take into account the encompassing transaction (if there is one).
You will have to provide more information about where the transaction is in relation to the call in order for a more complete answer.
Try using the operation behavior attribute below on the operation that allows TransactionFlow:
[OperationBehavior(TransactionScopeRequired=true)]
If a transaction flows from the client, then the service will use it.
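For reference, a minimal sketch of how that attribute sits alongside TransactionFlow on the contract (the service and operation names here are made up):

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void SubmitOrder(int orderId);
}

public class OrderService : IOrderService
{
    // TransactionScopeRequired = true: the operation runs inside the flowed
    // client transaction if one exists, otherwise inside a new local one.
    // TransactionAutoComplete = true (the default) votes to commit when the
    // operation returns without throwing.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitOrder(int orderId)
    {
        // work done here is part of the client's transaction when one is flowed
    }
}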
bool isTransactionComplete = true;
try
{
    using (TransactionScope trScope = new TransactionScope(TransactionScopeOption.Required))
    {
        // some work
        trScope.Complete();
    }
}
catch (TransactionAbortedException e)
{
    // Transaction holder got an exception from some service
    // and canceled the transaction
    isTransactionComplete = false;
}
catch // other exception
{
    isTransactionComplete = false;
    throw;
}

if (isTransactionComplete)
{
    // Success
}
As casperOne wrote, it depends on the settings. But you should be aware of complex transaction scenarios like:
1) a sessionful service with simultaneous transactions for one service instance
2) a transaction inside a transaction