Does anyone know of any cases where, when using a TransactionScope, the transaction is escalated to the DTC even though multiple connections are NOT open?
I am aware that if I open multiple connections (no matter what connection string) within a transaction scope, the transaction will most likely be promoted to the DTC.
Knowing this, I have gone to great lengths to make sure only ONE connection is opened within my transactions.
However, I have a client who is getting the following exception:
An error has occurred. Csla.DataPortalException: DataPortal.Update failed (The underlying provider failed on Open.) ---> Csla.Reflection.CallMethodException: EditableCategory.DataPortal_Update method call failed ---> System.Data.EntityException: The underlying provider failed on Open. ---> System.Transactions.TransactionManagerCommunicationException: Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool. ---> System.Runtime.InteropServices.COMException: The transaction manager has disabled its support for remote/network transactions.
Again, I am pretty sure there is only one connection opened within the scope. Take a look.
protected override void DataPortal_Update()
{
    using (System.Transactions.TransactionScope ts = new System.Transactions.TransactionScope(System.Transactions.TransactionScopeOption.Required, System.Transactions.TransactionManager.MaximumTimeout))
    {
        //get the dal manager; he knows which dal implementation to use
        using (var dalMgr = DataAccess.BusinessObjectsDalFactory.GetManager())
        {
            //get the category dal implementation
            var ecDal = dalMgr.GetProvider<DataAccess.BusinessObjectDalInterfaces.ICategoryDAL>();
            //assume all the data is good at this point, so use BypassPropertyChecks
            using (BypassPropertyChecks)
            {
                var catData = new Models.Category { CategoryId = CategoryId, CategoryName = CategoryName, LastChanged = TimeStamp };
                ecDal.UpdateCategory(catData);
                TimeStamp = catData.LastChanged;
            }
        }
        ts.Complete();
    }
    base.DataPortal_Update();
}
public class DalManager : Core.Sebring.DataAccess.IBusinessObjectsDalManager
{
    private static string _typeMask = typeof(DalManager).FullName.Replace("DalManager", "{0}");

    public T GetProvider<T>() where T : class
    {
        var typeName = string.Format(_typeMask, typeof(T).Name.Substring(1));
        var type = Type.GetType(typeName);
        if (type != null)
            return Activator.CreateInstance(type) as T;
        else
            throw new NotImplementedException(typeName);
    }

    public Csla.Data.DbContextManager<DataContext> ConnectionManager { get; private set; }

    public DalManager()
    {
        ConnectionManager = Csla.Data.DbContextManager<DataContext>.GetManager();
    }

    public void Dispose()
    {
        ConnectionManager.Dispose();
        ConnectionManager = null;
    }

    public void UpdateDataBase()
    {
        DatabaseUpgrader.PerformUpgrade();
    }
}
public void UpdateCategory(Models.Category catData)
{
    if (catData == null) return;
    using (var cntx = DbContextManager<DataContext>.GetManager())
    {
        var cat = cntx.DbContext.Set<Category>().FirstOrDefault(c => c.CategoryId == catData.CategoryId);
        if (cat == null) return;
        if (!cat.LastChanged.Matches(catData.LastChanged))
            throw new ConcurrencyException(cat.GetType().ToString());
        cat.CategoryName = catData.CategoryName;
        //cntx.DbContext.ChangeTracker.DetectChanges();
        cntx.DbContext.Entry<Category>(cat).State = System.Data.EntityState.Modified;
        cntx.DbContext.SaveChanges();
        catData.LastChanged = cat.LastChanged;
    }
}
The code for DbContextManager is available, but in short it just makes certain there is only one DbContext, and hence one connection, opened. Am I overlooking something? I guess it's possible that something is up with DbContextManager, so I have posted in the CSLA forums as well (DbContextManager is a part of CSLA). But has anyone run into scenarios where they are sure one connection is opened within the transaction scope and the transaction is still escalated to the DTC?
Of course I cannot reproduce the exception on my local dev machine OR any of our QA machines.
Any help is appreciated.
Thanks.
Entity Framework can randomly try to open a new connection when doing transactions with System.Transactions.TransactionScope.
Try adding a finally block and disposing of your transaction there; also get your dbContext and manually close the connection. This will reduce the number of times the transaction gets escalated, but it might still happen:
finally
{
    cntx.Database.Connection.Close();
    transaction.Dispose();
}
It's a known "bug"; you can find more here:
http://petermeinl.wordpress.com/2011/03/13/avoiding-unwanted-escalation-to-distributed-transactions/
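For illustration, here is a minimal sketch of that pattern (hypothetical context and variable names, not the asker's actual code): open the context yourself inside the scope, and close the underlying connection in a finally block so it cannot linger.

using (var ts = new TransactionScope())
{
    var ctx = new MyDbContext(); // hypothetical EF DbContext
    try
    {
        // ... query and modify entities here ...
        ctx.SaveChanges();
        ts.Complete();
    }
    finally
    {
        // Closing the connection explicitly reduces the chance that EF opens a
        // second physical connection later in the same scope, which would escalate.
        ctx.Database.Connection.Close();
        ctx.Dispose();
    }
}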
I have an old ObjectContext and a few new DbContexts in my project (i.e. BoundedContexts for different purposes).
Sometimes I need to commit changes from several of them in one transaction. In some cases I need to persist data from both an ObjectContext and a DbContext.
In EF 5.0, to avoid MSDTC escalation, I wrote this wrapper:
public class ContextUnitOfWork
{
    List<IContext> ContextList;

    public ContextUnitOfWork()
    {
        ContextList = new List<IContext>();
    }

    public void RegisterContext(IContext Context)
    {
        ContextList.Add(Context);
    }

    public bool IsDisposed
    {
        get { return ContextList.Any(x => x.IsDisposed); }
    }

    public bool HasChangedEntities
    {
        get { return ContextList.Any(x => x.HasChangedEntities); }
    }

    public void Commit()
    {
        bool HasDbContext = ContextList.OfType<System.Data.Entity.DbContext>().Any();
        try
        {
            if (HasDbContext)
            {
                ContextList.ForEach(x =>
                {
                    if (x is System.Data.Entity.DbContext)
                    {
                        (x as System.Data.Entity.DbContext).Database.Connection.Open();
                    }
                    else if (x is System.Data.Objects.ObjectContext)
                    {
                        ((System.Data.Objects.ObjectContext)x).Connection.Open();
                    }
                });
            }
            using (var scope = new System.Transactions.TransactionScope(System.Transactions.TransactionScopeOption.Required,
                new System.Transactions.TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted }))
            {
                ContextList.ForEach(x => x.Commit());
                scope.Complete();
            }
        }
        catch (System.Data.UpdateException uex)
        {
            var ErrorList = uex.StateEntries.Select(x => x.Entity).ToList();
        }
        finally
        {
            if (HasDbContext)
            {
                ContextList.ForEach(x =>
                {
                    if (x is System.Data.Entity.DbContext)
                    {
                        (x as System.Data.Entity.DbContext).Database.Connection.Close();
                    }
                    else if (x is System.Data.Objects.ObjectContext)
                    {
                        ((System.Data.Objects.ObjectContext)x).Connection.Close();
                    }
                });
            }
        }
    }
}
But in Entity Framework 6.0.1 it doesn't work. The ObjectContext commits successfully, but when the DbContext calls SaveChanges() an exception of type EntityException is thrown with the text
"The underlying provider failed on EnlistTransaction." and the inner exception contains {"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."}
Any idea how to commit the contexts in one transaction and avoid the MSDTC exception?
You are attempting to run everything in a local transaction, which is very tricky even with multiple contexts of the same type. The reason for this is that you cannot have multiple open connections within the same local transaction, and very often a new connection will be opened for the next context if the previous context is still alive. This triggers a promotion of the local transaction to a distributed transaction.
My experience with EF is that it only re-uses the current connection when the connection string (the normal one inside the entity connection string) is EXACTLY identical. If there is a single difference, the transaction will be promoted to a distributed transaction, which must be enabled on the system, which in your case it is not.
Also, if you are already executing a query and are still reading results from it, then starting another query at the same time will (of course) require another connection, and therefore the local transaction will be promoted to a distributed transaction.
Can you check whether the connection strings are identical? I would still be surprised if the current connection is re-used, though.
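If changing the approach is an option, one EF6-era technique (a sketch only, with a hypothetical context type, and it covers only the DbContext side rather than the old ObjectContext) is to open a single SqlConnection yourself, start a local SqlTransaction on it, and hand both to the context via Database.UseTransaction, so there is no ambient transaction and nothing to promote:

using (var conn = new SqlConnection(connectionString)) // hypothetical connection string
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        // contextOwnsConnection: false keeps the context from opening its own connection
        using (var dbCtx = new MyDbContext(conn, contextOwnsConnection: false))
        {
            dbCtx.Database.UseTransaction(tx); // join the existing local transaction
            // ... make changes ...
            dbCtx.SaveChanges();
        }
        // ... other work on the same connection and transaction ...
        tx.Commit();
    }
}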
I am having an issue with using a TransactionScope that has two Entity Framework ObjectContexts initialized in it, one for database X and one for database Y on the same server.
The first context "cersCtx" creates a new "Contact" object. The second context "coreCtx" attempts to find an Account.
On the call on the second context to find the account
account = coreCtx.Accounts.SingleOrDefault(p => p.Email == email && !p.Voided);
I get the following error:
The MSDTC transaction manager was unable to pull the transaction from the source transaction manager due to communication problems. Possible causes are: a firewall is present and it doesn't have an exception for the MSDTC process, the two machines cannot find each other by their NetBIOS names, or the support for network transactions is not enabled for one of the two transaction managers. (Exception from HRESULT: 0x8004D02B)
The odd thing is, when I go into MSDTC I see that there are transactions that commit, but most often they are aborting.
Does anyone have any ideas on how I can resolve this issue?
Here is an abbreviated sample of code that produces this issue.
public void CreateContact(string email)
{
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted }))
    {
        using (CERSEntities cersCtx = new CERSEntities())
        {
            Contact contact = new Contact();
            contact.Email = email;
            cersCtx.Contacts.AddObject(contact);
            cersCtx.SaveChanges();
        }

        CoreModel.Account account = null;
        using (CoreModel.CoreEntities coreCtx = new CoreModel.CoreEntities())
        {
            account = coreCtx.Accounts.SingleOrDefault(p => p.Email == email && !p.Voided);
        }

        scope.Complete();
    }
}
public void SomeMethod1()
{
    using (TemplateEntities ctx = new TemplateEntities())
    {
        //do something in this ctx
    }
}

public void SomeMethod2()
{
    using (TemplateEntities ctx = new TemplateEntities())
    {
        //do something else in this ctx
    }
}

public void SomeMethod()
{
    using (TemplateEntities ctx = new TemplateEntities())
    {
        using (TransactionScope tran = new TransactionScope())
        {
            SomeMethod1();
            SomeMethod2();
            var itemToDelete = (from x in ctx.Xxx
                                where x.Id == 1
                                select x).Single();
            ctx.Xxx.DeleteObject(itemToDelete);
            ctx.SaveChanges();
            tran.Complete();
        }
    }
}
Will everything in SomeMethod be executed in one transaction even though there are multiple contexts?
I am using POCO.
If you use TransactionScope with multiple ObjectContext instances, the transaction will be promoted to a distributed transaction, and the whole operation (SomeMethod) will still be handled atomically. But a distributed transaction requires an additional NT service and its dependencies. The service is called the Microsoft Distributed Transaction Coordinator (MSDTC), and it has to run on all involved servers (application server and database server). In a network scenario the service requires some additional configuration; RPC ports have to be opened in the firewalls for communication.
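If you want to see exactly when the promotion happens, one option (a small sketch, not part of the answer above) is to check the ambient transaction's DistributedIdentifier, which stays Guid.Empty as long as the transaction is still local:

using System;
using System.Transactions;

static class TransactionDiagnostics
{
    // Returns true once the ambient transaction has been escalated to MSDTC.
    public static bool IsDistributed()
    {
        var current = Transaction.Current;
        return current != null
            && current.TransactionInformation.DistributedIdentifier != Guid.Empty;
    }
}

Calling this after SomeMethod1, SomeMethod2, and SaveChanges shows which step triggers the escalation.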
Ultimately the database doesn't know about data-contexts, so simply: the rules of transactions apply. Being a serializable transaction, things like read locks and key-range-locks will be issued and honoured. As always, there is a risk of complication from deadlocks, but ultimately it should work. Note that all the contexts involved should enlist as required.
Using the TransactionScope object to set up an implicit transaction that doesn't need to be passed across function calls is great! However, if a connection is opened whilst another is already open, the transaction coordinator silently escalates the transaction to be distributed (which requires the MSDTC service to be running and takes up much more resources and time).
So, this is fine:
using (var ts = new TransactionScope())
{
    using (var c = DatabaseManager.GetOpenConnection())
    {
        // Do Work
    }

    using (var c = DatabaseManager.GetOpenConnection())
    {
        // Do more work in same transaction using different connection
    }

    ts.Complete();
}
But this escalates the transaction:
using (var ts = new TransactionScope())
{
    using (var c = DatabaseManager.GetOpenConnection())
    {
        // Do Work

        using (var nestedConnection = DatabaseManager.GetOpenConnection())
        {
            // Do more work in same transaction using different nested connection - escalated transaction to distributed
        }
    }

    ts.Complete();
}
Is there a recommended practice to avoid escalating transactions in this way, whilst still using nested connections?
The best I can come up with at the moment is having a ThreadStatic connection and reusing that if Transaction.Current is set, like so:
public static class DatabaseManager
{
    private const string _connectionString = "data source=.\\sql2008; initial catalog=test; integrated security=true";

    [ThreadStatic]
    private static SqlConnection _transactionConnection;

    [ThreadStatic]
    private static int _connectionNesting;

    private static SqlConnection GetTransactionConnection()
    {
        if (_transactionConnection == null)
        {
            Transaction.Current.TransactionCompleted += ((s, e) =>
            {
                _connectionNesting = 0;
                if (_transactionConnection != null)
                {
                    _transactionConnection.Dispose();
                    _transactionConnection = null;
                }
            });
            _transactionConnection = new SqlConnection(_connectionString);
            _transactionConnection.Disposed += ((s, e) =>
            {
                if (Transaction.Current != null)
                {
                    _connectionNesting--;
                    if (_connectionNesting > 0)
                    {
                        // Since connection is nested and same as parent, need to keep it open as parent is not expecting it to be closed!
                        _transactionConnection.ConnectionString = _connectionString;
                        _transactionConnection.Open();
                    }
                    else
                    {
                        // Can forget transaction connection and spin up a new one next time one's asked for inside this transaction
                        _transactionConnection = null;
                    }
                }
            });
        }
        return _transactionConnection;
    }

    public static SqlConnection GetOpenConnection()
    {
        SqlConnection connection;
        if (Transaction.Current != null)
        {
            connection = GetTransactionConnection();
            _connectionNesting++;
        }
        else
        {
            connection = new SqlConnection(_connectionString);
        }
        if (connection.State != ConnectionState.Open)
        {
            connection.Open();
        }
        return connection;
    }
}
Edit: So, if the answer is to reuse the same connection when it's nested inside a transactionscope, as the code above does, I wonder about the implications of disposing of this connection mid-transaction.
So far as I can see (using Reflector to examine code), the connection's settings (connection string etc.) are reset and the connection is closed. So (in theory), re-setting the connection string and opening the connection on subsequent calls should "reuse" the connection and prevent escalation (and my initial testing agrees with this).
It does seem a little hacky though... and I'm sure there must be a best practice somewhere that states that one should not continue to use an object after it's been disposed of!
However, since I cannot subclass the sealed SqlConnection, and want to maintain my transaction-agnostic connection-pool-friendly methods, I struggle (but would be delighted) to see a better way.
Also, I realised that I could force non-nested connections by throwing an exception if application code attempts to open a nested connection (which in most cases is unnecessary in our codebase):
public static class DatabaseManager
{
    private const string _connectionString = "data source=.\\sql2008; initial catalog=test; integrated security=true; enlist=true;Application Name='jimmy'";

    [ThreadStatic]
    private static bool _transactionHooked;

    [ThreadStatic]
    private static bool _openConnection;

    public static SqlConnection GetOpenConnection()
    {
        var connection = new SqlConnection(_connectionString);
        if (Transaction.Current != null)
        {
            if (_openConnection)
            {
                throw new ApplicationException("Nested connections in transaction not allowed");
            }
            _openConnection = true;
            connection.Disposed += ((s, e) => _openConnection = false);
            if (!_transactionHooked)
            {
                Transaction.Current.TransactionCompleted += ((s, e) =>
                {
                    _openConnection = false;
                    _transactionHooked = false;
                });
                _transactionHooked = true;
            }
        }
        connection.Open();
        return connection;
    }
}
Would still value a less hacky solution :)
One of the primary reasons for transaction escalation is when you have multiple (different) connections involved in a transaction. This almost always escalates to a distributed transaction. And it is indeed a pain.
This is why we make sure that all our transactions use a single connection object. There are several ways to do this. For the most part, we use a thread-static field to store a connection object, and our classes that do the database persistence work use that thread-static connection object (which is shared, of course). This prevents multiple connection objects from being used and has eliminated transaction escalation. You could also achieve this by simply passing a connection object from method to method, but this isn't as clean, IMO.
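For completeness, here is a bare-bones sketch of the "pass the connection along" alternative mentioned above (hypothetical method, table, and connection-string names):

public void SomeWorkflow()
{
    using (var ts = new TransactionScope())
    using (var connection = new SqlConnection(_connectionString))
    {
        connection.Open();          // the one physical connection for the whole scope
        WriteLog(connection);       // every persistence method receives the connection,
        UpdateAccounts(connection); // so no second connection is ever opened
        ts.Complete();
    }
}

private void WriteLog(SqlConnection connection)
{
    using (var cmd = new SqlCommand("INSERT INTO Log (Message) VALUES ('hello')", connection))
    {
        cmd.ExecuteNonQuery();
    }
}

private void UpdateAccounts(SqlConnection connection)
{
    using (var cmd = new SqlCommand("UPDATE Accounts SET Active = 1", connection))
    {
        cmd.ExecuteNonQuery();
    }
}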
I have a web application that issues requests to 3 databases in the DAL. I'm writing some integration tests to make sure that the overall functionality round trip actually does what I expect it to do. This is completely separate from my unit tests, just FYI.
The way I was intending to write these tests were something to the effect of this
[Test]
public void WorkflowExampleTest()
{
    using (var transaction = new TransactionScope())
    {
        Presenter.ProcessWorkflow();
    }
}
The Presenter in this case has already been set up. The problem comes into play inside the ProcessWorkflow method because it calls various repositories which in turn access different databases, and my SQL Server box does not have MSDTC enabled, so I get an error whenever I try to either create a new SQL connection or change a cached connection's database to target a different one.
For brevity the Presenter resembles something like:
public void ProcessWorkflow()
{
    LogRepository.LogSomethingInLogDatabase();
    var l_results = ProcessRepository.DoSomeWorkOnProcessDatabase();
    ResultsRepository.IssueResultstoResultsDatabase(l_results);
}
I've attempted numerous things to solve this problem.
Caching one active connection at all times and changing the target database
Caching one active connection for each target database (this was kind of useless because pooling should do this for me, but I wanted to see if I got different results)
Adding additional TransactionScopes inside each repository so that they have their own transactions using the TransactionScopeOption "RequiresNew"
My 3rd attempt on the list looks something like this:
public void LogSomethingInLogDatabase()
{
    using (var transaction =
        new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        //do some database work
        transaction.Complete();
    }
}
And the 3rd thing I tried actually got the tests to work, but all the transactions that completed actually HIT my database! So that was an utter failure, since the entire point is to NOT affect my database.
My question therefore is, what other options are out there to accomplish what I'm trying to do given the constraints I've laid out?
EDIT:
This is what "//do some database work" would look like
using (var l_context = new DataContext(TargetDatabaseEnum.SomeDatabase))
{
    //use a SqlCommand here
    //use a SqlDataAdapter inside the SqlCommand
    //etc.
}
and the DataContext itself looks something like this
public class DataContext : IDisposable
{
    static int ConnectionReferences { get; set; }
    static SqlConnection Connection { get; set; }
    TargetDatabaseEnum OriginalDatabase { get; set; }

    public DataContext(TargetDatabaseEnum database)
    {
        if (Connection == null)
            Connection = new SqlConnection();
        if (Connection.Database != DatabaseInfo.GetDatabaseName(database))
        {
            OriginalDatabase =
                DatabaseInfo.GetDatabaseEnum(Connection.Database);
            Connection.ChangeDatabase(
                DatabaseInfo.GetDatabaseName(database));
        }
        if (Connection.State == ConnectionState.Closed)
        {
            Connection.Open(); //<- ERROR HAPPENS HERE
        }
        ConnectionReferences++;
    }

    public void Dispose()
    {
        if (Connection.State == ConnectionState.Open)
        {
            Connection.ChangeDatabase(
                DatabaseInfo.GetDatabaseName(OriginalDatabase));
        }
        if (Connection != null && --ConnectionReferences <= 0)
        {
            if (Connection.State == ConnectionState.Open)
                Connection.Close();
            Connection.Dispose();
        }
    }
}
Set Enlist=false on the connection string to avoid auto-enlistment in the transaction.
Manually enlist the connection as a participant in the transaction scope. (http://msdn.microsoft.com/en-us/library/ms172153%28v=VS.80%29.aspx)
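A rough sketch of how those two suggestions fit together (the connection string is made up): with Enlist=false the connection no longer auto-enlists on Open, and EnlistTransaction lets you opt it into the ambient transaction explicitly where you want it.

var connectionString = "data source=.;initial catalog=test;integrated security=true;Enlist=false";

using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();                                  // does NOT join the ambient transaction
    connection.EnlistTransaction(Transaction.Current);  // opt in manually where it is wanted
    // ... run commands on this connection ...
    scope.Complete();
}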
OK, I found a way around this issue. The only reason I'm doing it this way is because I couldn't find ANY other way to fix this problem, and because it's in my integration tests, so I'm not concerned about this having adverse effects in production code.
I had to add a property to my DataContext to act as a flag to keep track of whether or not to dispose of the connection object when my DataContext is being disposed. This way, the connection is kept alive throughout the entire transaction scope and therefore no longer bothers the DTC.
Here's a sample of my new Dispose:
internal static bool SuppressConnectionDispose { get; set; }

public void Dispose()
{
    if (Connection.State == ConnectionState.Open)
    {
        Connection.ChangeDatabase(
            DatabaseInfo.GetDatabaseName(OriginalDatabase));
    }
    if (Connection != null
        && --ConnectionReferences <= 0
        && !SuppressConnectionDispose)
    {
        if (Connection.State == ConnectionState.Open)
            Connection.Close();
        Connection.Dispose();
    }
}
This allows my integration tests to take the form of:
[Test]
public void WorkflowExampleTest()
{
    using (var transaction = new TransactionScope())
    {
        DataContext.SuppressConnectionDispose = true;
        Presenter.ProcessWorkflow();
    }
}
I would not recommend utilizing this in production code, but for integration tests I think it is appropriate. Also keep in mind this only works for connections where the server is always the same, as well as the user.
I hope this helps anyone else who runs into the same problem I had.
If you don't want to use MSDTC, you can use SQL transactions directly.
See SqlConnection.BeginTransaction().
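A minimal sketch of that approach (made-up connection string and SQL); everything stays on one connection and one local transaction, so MSDTC is never involved:

using (var connection = new SqlConnection("data source=.;initial catalog=test;integrated security=true"))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        try
        {
            using (var cmd = new SqlCommand("UPDATE Categories SET CategoryName = 'x' WHERE CategoryId = 1", connection, transaction))
            {
                cmd.ExecuteNonQuery();
            }
            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}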