Nested Transactions with TransactionScope - c#

If you have something like this:
IBinaryAssetStructureRepository rep = new BinaryAssetStructureRepository();
var userDto = new UserDto { id = 3345 };
var dto = new BinaryAssetBranchNodeDto("name", userDto, userDto);
using (var scope1 = new TransactionScope())
{
    using (var scope2 = new TransactionScope())
    {
        // Persist to database
        rep.CreateRoot(dto, 1, false);
        scope2.Complete();
    }
    scope1.Dispose();
}
dto = rep.GetByKey(dto.id, -1, false);
Will the inner TransactionScope scope2 also be rolled back?

Yes.
The inner scope is enlisted in the same transaction as the outer one, so the whole thing will roll back. This happens because you didn't enlist the inner scope in a new transaction using TransactionScopeOption.RequiresNew.

See here for an explanation on this subject: http://web.archive.org/web/20091012162649/http://www.pluralsight.com/community/blogs/jimjohn/archive/2005/06/18/11451.aspx.
Also, note that the scope1.Dispose() call is redundant, since scope1 is disposed automatically at the end of the using block that declares it.
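For illustration, here is a minimal sketch (reusing the identifiers from the question) of how the inner scope would commit independently if it were created with TransactionScopeOption.RequiresNew:
using (var scope1 = new TransactionScope())
{
    // RequiresNew gives scope2 its own transaction, independent of scope1.
    using (var scope2 = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        rep.CreateRoot(dto, 1, false); // persists within scope2's own transaction
        scope2.Complete();             // commits when scope2 is disposed
    }
    // scope1 is never completed, so only the outer transaction rolls back;
    // the work committed in scope2 stays committed.
}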

C# What is the point of the using statement?

I'm not talking about references to assemblies, but rather the using statement within the code.
For example what is the difference between this:
using (DataReader dr = .... )
{
    ...stuff involving data reader...
}
and this:
{
    DataReader dr = ...
    ...stuff involving data reader...
}
Surely the DataReader is cleaned up by the garbage collector when it goes out of scope anyway?
The point of a using statement is that the object you create with the statement is implicitly disposed at the end of the block. In your second code snippet, the data reader never gets closed. It's a way to ensure that disposable resources are not held onto any longer than required and will be released even if an exception is thrown. This:
using (var obj = new SomeDisposableType())
{
    // use obj here.
}
is functionally equivalent to this:
var obj = new SomeDisposableType();
try
{
    // use obj here.
}
finally
{
    obj.Dispose();
}
You can use the same scope for multiple disposable objects like so:
using (var obj1 = new SomeDisposableType())
using (var obj2 = new SomeOtherDisposableType())
{
    // use obj1 and obj2 here.
}
You only need to nest using blocks if you need to interleave other code, e.g.
var table = new DataTable();
using (var connection = new SqlConnection("connection string here"))
using (var command = new SqlCommand("SQL query here", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        table.Load(reader);
    }
}
A using statement like this automatically disposes the object it acquires at the end of the scope. See the C# language documentation on the using statement for further details.
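As a side note, since C# 8.0 you can also write a using declaration, which disposes the variable at the end of the enclosing scope without introducing an extra block. A minimal sketch:
void ReadData(SqlCommand command)
{
    using var reader = command.ExecuteReader();
    // use reader here; it is disposed when the method (the enclosing scope) exits.
}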

Why instantiate new DbContext for each step of test

In the Entity Framework Core documentation on Testing with SQLite, the sample code instantiates a new DbContext for each step of a test. Is there a reason for doing this?
// Copied from the docs:
[Fact]
public void Add_writes_to_database()
{
    // In-memory database only exists while the connection is open
    var connection = new SqliteConnection("DataSource=:memory:");
    connection.Open();

    try
    {
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseSqlite(connection)
            .Options;

        // Create the schema in the database
        using (var context = new BloggingContext(options))
        {
            context.Database.EnsureCreated();
        }

        // Run the test against one instance of the context
        using (var context = new BloggingContext(options))
        {
            var service = new BlogService(context);
            service.Add("http://sample.com");
            context.SaveChanges();
        }

        // Use a separate instance of the context to verify correct data was saved to database
        using (var context = new BloggingContext(options))
        {
            Assert.Equal(1, context.Blogs.Count());
            Assert.Equal("http://sample.com", context.Blogs.Single().Url);
        }
    }
    finally
    {
        connection.Close();
    }
}
// Why not do this instead:
[Fact]
public void Add_writes_to_database()
{
    // In-memory database only exists while the connection is open
    var connection = new SqliteConnection("DataSource=:memory:");
    connection.Open();

    try
    {
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseSqlite(connection)
            .Options;

        // Create the schema in the database
        using (var context = new BloggingContext(options))
        {
            context.Database.EnsureCreated();
            var service = new BlogService(context);
            service.Add("http://sample.com");
            context.SaveChanges();
            Assert.Equal(1, context.Blogs.Count());
            Assert.Equal("http://sample.com", context.Blogs.Single().Url);
        }
    }
    finally
    {
        connection.Close();
    }
}
Why not instantiate the context once, and use that instance throughout the entire test method, as shown in the second code sample?
Because that's how contexts are meant to be used: created per unit of work (e.g. per request) and then disposed.
One practical reason is to ensure that you're going back to the data source each time instead of just looking at state within the context.
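A minimal sketch (assuming the Blog entity from the docs sample) of the kind of false positive a single context can produce: because the context's identity map returns the already-tracked instance, an unsaved change can look persisted.
using (var context = new BloggingContext(options))
{
    var blog = context.Blogs.Single();
    blog.Url = "http://changed.example";
    // No SaveChanges() here, yet the assertion passes: the query returns
    // the tracked instance with the in-memory modification, while the
    // database still holds the old value.
    Assert.Equal("http://changed.example", context.Blogs.Single().Url);
}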

Single TransactionScope with multiple DbContexts

I have searched a lot on the Internet without finding a similar case. I have one TransactionScope with several DbContexts.
I want to commit the changes to the database only if all contexts have saved their changes successfully.
The problem I'm facing is that I had to call generalContext.SaveChanges() in the middle of the code, because the changes applied to data that generalContext had retrieved earlier, and I noticed that the changes are committed immediately after calling generalContext.SaveChanges().
What did I do wrong?
I have tried TransactionScopeOption.Required and TransactionScopeOption.RequiresNew, without being able to solve the problem:
var transactionOptions = new TransactionOptions();
transactionOptions.Timeout = TimeSpan.FromMinutes(30);
using (var scope = new TransactionScope(TransactionScopeOption.Required, transactionOptions))
using (var generalContext = new CoreEntities())
{
    try
    {
        using (var subContext = new CoreEntities())
        {
            // the problem is here, that the changes are committed!!!
            generalContext.SaveChanges();
            subContext.SaveChanges();
        }
        scope.Complete();
    }
    catch
    {
        scope.Dispose();
    }
}

Prepared statement caching issue in Cassandra Csharp driver

I believe I have found a bug with the logic of how a prepared statement is cached in the StatementFactory in the Cassandra csharp driver (version 2.7.3). Here is the use case.
Guid key = Guid.NewGuid(); // your key
ISession session_foo = new Session("foo"); //This is pseudo code
ISession session_bar = new Session("bar");
var foo_mapper = new Mapper(session_foo); //table foo_bar
var bar_mapper = new Mapper(session_bar); //table foo_bar
await Task.WhenAll(
    foo_mapper.DeleteAsync<Foo>("WHERE id = ?", key),
    bar_mapper.DeleteAsync<Bar>("WHERE id = ?", key));
We have found that after running these deletes, only the first request succeeds. After diving into the source code of StatementFactory:
public Task<Statement> GetStatementAsync(ISession session, Cql cql)
{
    if (cql.QueryOptions.NoPrepare)
    {
        // Use a SimpleStatement if we're not supposed to prepare
        Statement statement = new SimpleStatement(cql.Statement, cql.Arguments);
        SetStatementProperties(statement, cql);
        return TaskHelper.ToTask(statement);
    }
    return _statementCache
        .GetOrAdd(cql.Statement, session.PrepareAsync)
        .Continue(t =>
        {
            if (_statementCache.Count > MaxPreparedStatementsThreshold)
            {
                Logger.Warning(String.Format("The prepared statement cache contains {0} queries. Use parameter markers for queries. You can configure this warning threshold using MappingConfiguration.SetMaxStatementPreparedThreshold() method.", _statementCache.Count));
            }
            Statement boundStatement = t.Result.Bind(cql.Arguments);
            SetStatementProperties(boundStatement, cql);
            return boundStatement;
        });
}
You can see that the cache key is only the CQL statement. In our case, we have the same table name in different keyspaces (one session per keyspace), so the CQL statement in both queries looks the same, i.e. DELETE FROM foo_bar WHERE id = ?.
If I had to guess, I would say that a simple fix would be to combine the cql statement and keyspace together as the cache key.
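In other words, a hypothetical sketch of that fix against the method above (not the driver's actual patch) could key the cache on both values:
return _statementCache
    .GetOrAdd(session.Keyspace + "/" + cql.Statement, _ => session.PrepareAsync(cql.Statement))
    .Continue(t => { /* bind and return as before */ });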
Has anyone else run into this issue before?
As a simple workaround, I am skipping the cache by using the DoNotPrepare option:
await _mapper.DeleteAsync<Foo>(Cql.New("WHERE id = ?", key).WithOptions(opt => opt.DoNotPrepare()));
I also found an open issue with DataStax.
There is an open ticket to fix this behaviour.
As a workaround, you can use different MappingConfiguration instances when creating the Mapper:
ISession session1 = cluster.Connect("ks1");
ISession session2 = cluster.Connect("ks2");
IMapper mapper1 = new Mapper(session1, new MappingConfiguration());
IMapper mapper2 = new Mapper(session2, new MappingConfiguration());
Or, you can reuse a single ISession instance and fully-qualify your queries to include the keyspace.
MappingConfiguration.Global.Define(
    new Map<Foo>()
        .TableName("foo")
        .KeyspaceName("ks1"),
    new Map<Bar>()
        .TableName("bar")
        .KeyspaceName("ks2"));

ISession session = cluster.Connect();
IMapper mapper = new Mapper(session);
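Presumably the second workaround succeeds because the mapper then generates keyspace-qualified statements (e.g. DELETE FROM ks1.foo WHERE id = ?), so the statement text, and therefore the cache key, is no longer identical across keyspaces.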

Using TransactionScope in Service Layer for UnitOfWork operations

Is my approach right, bundling all 3 dataprovider.GetXXX methods in a TransactionScope in the service layer as a unit of work?
Would you do something different?
From where does the TransactionScope ts know the concrete ConnectionString?
Should I get the Transaction object from my connection and pass that Transaction object to the constructor of the TransactionScope?
Service layer, e.g. AdministrationService.cs:
private List<Schoolclass> GetAdministrationData()
{
    List<Schoolclass> schoolclasses = null;
    using (TransactionScope ts = new TransactionScope())
    {
        schoolclasses = _adminDataProvider.GetSchoolclasses();
        foreach (var s in schoolclasses)
        {
            List<Pupil> pupils = _adminDataProvider.GetPupils(s.Id);
            s.Pupils = pupils;
            foreach (var p in pupils)
            {
                List<Document> documents = _documentDataProvider.GetDocuments(p.Id);
                p.Documents = documents;
            }
        }
        ts.Complete();
    }
    return schoolclasses;
}
A sample of how any of those 3 methods in the DataProvider could look:
public List<Schoolclass> GetSchoolclassList()
{
    // Formerly used without TransactionScope => using (var trans = DataAccess.ConnectionManager.BeginTransaction())
    using (var com = new SQLiteCommand(DataAccess.ConnectionManager))
    {
        com.CommandText = "SELECT * FROM SCHOOLCLASS";
        var schoolclasses = new List<Schoolclass>();
        using (var reader = com.ExecuteReader())
        {
            Schoolclass schoolclass = null;
            while (reader.Read())
            {
                schoolclass = new Schoolclass();
                schoolclass.SchoolclassId = Convert.ToInt32(reader["schoolclassId"]);
                schoolclass.SchoolclassCode = reader["schoolclasscode"].ToString();
                schoolclasses.Add(schoolclass);
            }
        }
        // Formerly used without TransactionScope => trans.Commit();
        return schoolclasses;
    }
}
This looks fine - that's what TransactionScope is there for, to provide transaction control in your code (and this is a common pattern for UoW).
From where does the TransactionScope ts know the concrete ConnectionString?
It doesn't. That depends on your data access layer and doesn't really mean much to TransactionScope. What TransactionScope does is create a transaction (by default a lightweight one); if your data access spans several databases, the transaction is automatically escalated to a distributed transaction, which uses MSDTC under the hood.
Should I get the Transaction object from my connection and pass this Transaction object to the constructor of the TransactionScope?
No, no, no. See the above. Just do what you are doing now. There is no harm in nesting TransactionScopes.
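To make the connection-string point concrete, here is a minimal sketch (connection string and table name are made up) showing that an ADO.NET connection opened inside a scope enlists itself in the ambient transaction; the scope never needs to know the connection string:
using System;
using System.Data.SqlClient;
using System.Transactions;

class Demo
{
    static void Main()
    {
        // Hypothetical connection string; TransactionScope never sees it.
        const string cs = "Data Source=.;Initial Catalog=School;Integrated Security=true";

        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(cs))
        {
            connection.Open(); // auto-enlists in Transaction.Current

            using (var command = new SqlCommand("SELECT COUNT(*) FROM SCHOOLCLASS", connection))
            {
                Console.WriteLine(command.ExecuteScalar());
            }

            scope.Complete(); // without this, Dispose rolls everything back
        }
    }
}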
