I keep wanting to use TransactionScope, but I just can't figure out what people find so useful about it. So let's take an example:
using (TransactionScope tran = new TransactionScope()) {
    CallAMethodThatDoesSomeWork1();
    CallAMethodThatDoesSomeWork2();
    tran.Complete();
}
So the most basic question: how do I write CallAMethodThatDoesSomeWork1() so that it knows how to roll its actions back if, say, CallAMethodThatDoesSomeWork2() throws an exception?
The code within the methods you call needs to be transaction aware and enlist in the active transaction. This means creating or using classes that act as resource managers (see Implement Your Own Resource Manager).
You do this by implementing IEnlistmentNotification and enlisting in the transaction. When the transaction is completed, the transaction manager will call methods as defined on that interface so that your code can do/undo the work.
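For illustration, here is a minimal sketch of a volatile resource manager, assuming the work can be compensated in memory; the class and delegate names are made up and not part of any library:

using System;
using System.Transactions;

// Illustrative volatile resource manager: does the work immediately and
// registers compensation that runs if the ambient transaction rolls back.
public class CompensatingWorkItem : IEnlistmentNotification
{
    private readonly Action _doWork;
    private readonly Action _undoWork;

    public CompensatingWorkItem(Action doWork, Action undoWork)
    {
        _doWork = doWork;
        _undoWork = undoWork;
    }

    // Call this from inside CallAMethodThatDoesSomeWork1().
    public void Execute()
    {
        _doWork();
        if (Transaction.Current != null)
        {
            // Volatile enlistment: suitable for in-memory state, no durable log.
            Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
        }
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        // Vote to commit; call preparingEnlistment.ForceRollback() to veto instead.
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        // The transaction aborted (e.g. CallAMethodThatDoesSomeWork2() threw),
        // so undo the work done in Execute().
        _undoWork();
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}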
Related
I have many classes, each associated with a specific table. At a higher level I have parent transactions that call various methods, so the child methods must be able to work inside an enclosing transaction. Often it is not known beforehand whether a method will be part of a transaction at some stage, since we try to keep the methods generic.
In some cases we also need Dapper queries, e.g. to turn identity insert on/off. I understood that Dapper requires passing a transaction as a parameter, otherwise it will not be enlisted in the transaction (it turns out I was wrong, see below).
The DbContext (pooling) is set up per "component/dll". Since a connection is only enlisted when it is opened inside a transaction, a scope is used for the context to ensure it is opened for this transaction. Furthermore, that helps when calling these same methods from e.g. HealthChecks, which would otherwise complain about too many open connections when many of them hit the same connections opened by services. Having this scope in the methods also helps when calling them in parallel work, so that they run more cleanly on parallel threads.
In other words, this way these methods can be called from parents that may be parallel jobs, singletons requiring a service scope, or parent transactions that require a hierarchy of transients.
The problem was: for some reason, transaction in the following code is always null.
try {
using TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,
System.TimeSpan.FromMinutes(10), TransactionScopeAsyncFlowOption.Enabled);
using Context localcontext = new Context(new DbContextOptionsBuilder<Context>()
.UseSqlServer(_options.ConnectionString).Options);
// just for safety:
localcontext.Database.GetDbConnection().Open();
// the following line is only for dapper input:
IDbContextTransaction transaction = localcontext.Database.CurrentTransaction;
await localcontext.Database.GetDbConnection()
.ExecuteAsync("SET IDENTITY_INSERT [dbo].[Whatever] ON",
null, (System.Data.IDbTransaction)transaction);
}
(which I took from here: Pass current transaction to DbCommand and here: https://github.com/zzzprojects/Dapper.Transaction)
UPDATE / SOLUTION:
OK. So... when using a TransactionScope, the transaction parameter does not have to be passed to Dapper to ensure it enlists in the transaction. That was the clue.
Remove this line
IDbContextTransaction transaction = localcontext.Database.CurrentTransaction;
If there's an active TransactionScope, your SqlConnection will be automatically enlisted in it. The whole point of TransactionScope is that your data access methods can be completely free of transaction handling; the transaction is orchestrated in some outer business layer or controller method.
The reason CurrentTransaction is null is that there are two different ways to handle transactions. If you want the current System.Transactions.Transaction, you get it with System.Transactions.Transaction.Current.
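For illustration, a minimal sketch of the difference:

using (var scope = new TransactionScope())
{
    // The ambient System.Transactions transaction created by the scope:
    System.Transactions.Transaction ambient = System.Transactions.Transaction.Current; // not null inside the scope

    // EF Core's context.Database.CurrentTransaction belongs to the other API
    // (ADO.NET-style transactions started with BeginTransaction) and stays
    // null here unless you call BeginTransaction yourself.

    scope.Complete();
}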
Stepping back, there are three separate ways to manage transactions with SqlConnection.
TSQL Transactions: You can use the TSQL API directly, issuing BEGIN TRAN, COMMIT TRAN, etc.
ADO.NET Transactions: SqlConnection.BeginTransaction, IDbTransaction, SqlTransaction, etc. This is a wrapper over the TSQL API, and it's a PITA because it introduces a useless requirement to pass the SqlTransaction to each SqlCommand that you want to enlist in the transaction. But enlisting TSQL commands in the current transaction is not optional, and never has been. And that's a pain because methods that use SqlCommand may not know whether there is a transaction. Dapper and EF both wrap this API in their transaction handling methods.
System.Transactions Transactions: Partly because of this, System.Transactions was introduced in .NET 2.0 as a new and unified way to handle transactions in .NET, and SqlClient added support for it. The main innovation of System.Transactions was adding "ambient" transactions, so code can be agnostic about whether there's a transaction and the right thing will just happen. When a SqlConnection is opened while there is a current Transaction, the SqlConnection is enlisted in it, and the changes made using the SqlConnection will not be committed until the Transaction is committed. There is no need for your ADO.NET code to know about the Transaction. Dapper and EF are both built on top of ADO.NET and SqlClient, so this all just works.
It's easier to explain what's wrong by showing what the code should be:
using (var connection = new SqlConnection(_connectionString))
{
    await connection.ExecuteAsync("SET IDENTITY_INSERT [dbo].[Whatever] ON");
}
Where ExecuteAsync comes from Dapper.
There's no reason to create a transaction, much less a transaction scope, to execute a single command.
There's no reason to create a DbContext just to open a connection to the database either, or to execute raw SQL commands. DbContext isn't a database connection; its job is to map objects to relational data. There are no objects involved here.
To execute multiple commands there's no reason to use multiple connections. Just execute the commands one after the other. If it's really necessary, use an explicit database transaction around those commands. Or create the connection inside a single transaction scope.
Let's say you have an array with those commands, e.g. something read from a script file:
string[] commands = new[] { ... };

using (var connection = new SqlConnection(_connectionString))
{
    await connection.OpenAsync();
    using (var transaction = connection.BeginTransaction())
    {
        foreach (var sql in commands)
        {
            await connection.ExecuteAsync(sql, transaction: transaction);
        }
        transaction.Commit();
    }
}
Doing the same thing using a TransactionScope only requires opening the connection inside the transaction scope.
string[] commands = new[] { ... };

using (var scope = new TransactionScope(TransactionScopeOption.Required,
    System.TimeSpan.FromMinutes(10), TransactionScopeAsyncFlowOption.Enabled))
using (var connection = new SqlConnection(_connectionString))
{
    await connection.OpenAsync();
    foreach (var sql in commands)
    {
        await connection.ExecuteAsync(sql);
    }
    scope.Complete();
}
Good day,
I have a method which contains several database commits (about 15) in different classes. I need a way to make all the db changes only if the method did not throw any exception. I am thinking about using a TransactionScope, and my question is whether I can use a single instance of that TransactionScope across all the different classes, and if not, what is the best practice to perform a rollback in case of an exception?
Thanks!
You usually don't need to perform a rollback explicitly; methods that perform database operations might not even be aware of the ambient transaction (that is, you don't need to pass the TransactionScope to them). Just do:
using (var tran = new TransactionScope()) {
    FirstDatabaseOperation();
    SecondDatabaseOperation();
    // etc
    tran.Complete();
}
If an exception happens in any operation, the transaction will be rolled back for you, because the TransactionScope will be disposed without Complete being called.
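For example, one of those methods might look like this (just a sketch, assuming Dapper and SqlClient; the table and column names are made up):

void FirstDatabaseOperation()
{
    using (var connection = new SqlConnection(_connectionString))
    {
        // Opening inside the scope auto-enlists the connection in the ambient transaction.
        connection.Open();
        connection.Execute("UPDATE dbo.Orders SET Processed = 1 WHERE Processed = 0");
    }
}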
I try to use TransactionScope in this way:
using (TransactionScope ts = new TransactionScope())
{
    DAL.delete();
    DAL.create();
    ts.Complete();
}
where I have an independent DAL (Data Access Layer) module to do database operations. Each operation, like delete() and create(), is atomic, i.e. each one commits as soon as it is called.
I have tried this code in order to wrap these two operations together as a transaction. But no matter whether I call ts.Complete();, they are both committed to the database and no rollback happens.
What can I do in this case? Thanks.
The TS creates an ambient transaction which your DAL layer will automatically pick up on. Your code implies you want the delete and create to be treated as one atomic operation. If you want them independent, create another TS block after the first and move your create statement there.
If you want to roll back, you need to leave the using block without calling Complete on the scope; normally this happens because one of your DAL methods throws an exception.
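For example, to make the delete and create independent while still getting rollback on failure inside each scope (a sketch based on the code above):

using (TransactionScope ts1 = new TransactionScope())
{
    DAL.delete();
    ts1.Complete();   // the delete commits on its own
}

using (TransactionScope ts2 = new TransactionScope())
{
    // If create() throws, ts2 is disposed without Complete() and its work is
    // rolled back, but the delete above has already been committed.
    DAL.create();
    ts2.Complete();
}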
I want to create a transaction, write some data in a sub-transaction, read the data back, and then roll back the transaction.
using (var transaction = new TransactionScope())
{
    using (var innerTransaction = new TransactionScope())
    {
        // save data via LINQ / DataContext
        innerTransaction.Complete();
    }

    // Get back for assertions
    var tempItem = // read data via LINQ / DataContext THROWS EXCEPTION
}
But while reading I get "System.Transactions.TransactionException : The operation is not valid for the state of the transaction.".
How should I set transaction properties to avoid this?
This exception cannot be debugged without the full stack trace. It has a different meaning depending on the context. Usually it means you're doing something you shouldn't inside the transaction, but without seeing db calls or stack trace all anybody can do is guess. Some common causes I know of (and this is by no means comprehensive I'm sure) include:
Accessing multiple data sources (i.e. different connection strings) within a nested TransactionScope. This causes promotion to a distributed transaction, and if you are not running DTC it will fail. The answer is usually not to enable DTC, but to clean up your transaction or to wrap the other data access in a new TransactionScope(TransactionScopeOption.RequiresNew) (see the sketch after this list).
Unhandled exceptions within the TransactionScope.
Any operation that violates the isolation level, such as trying to read rows just inserted/updated.
SQL deadlocks; transactions can even deadlock themselves in certain cases, but if #1 applies, isolating other ops into new transactions can cause deadlocks if you're not careful.
Transaction timeouts.
Any other error from the database.
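For the first cause (multiple data sources), here is a sketch of isolating the second data source in its own transaction so the outer one is not promoted; the connection string fields are illustrative:

using (var outer = new TransactionScope())
{
    using (var conn1 = new SqlConnection(_mainConnectionString))
    {
        conn1.Open(); // enlists in the outer ambient transaction
        // ... work against the main database ...
    }

    // Independent transaction for the second data source, so the outer
    // transaction is not escalated to a distributed (DTC) transaction.
    using (var inner = new TransactionScope(TransactionScopeOption.RequiresNew))
    using (var conn2 = new SqlConnection(_otherConnectionString))
    {
        conn2.Open(); // enlists only in the inner transaction
        // ... work against the other database ...
        inner.Complete();
    }

    outer.Complete();
}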
I definitely don't know every possible cause, but if you post the complete stack trace and the actual db calls in your code I'll take a look and let you know if I see anything.
You have two nested TransactionScope objects??
And no try catch block.
http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx
I think you'll find the specific answer is that you cannot complete a transaction that hasn't begun anything; it's in an invalid state. Do you actually have any code where your LINQ comments are? Does a connection actually get established?
I have a one-to-many relationship within my class model.
Example:
A single role can have many permissions attached to it, so there are two tables: one for the role and one for the permissions of each role.
Now I have a role class which in turn has a permission list as a member. When I need to do an update, I instantiate a TransactionScope object and do an update for the role. After that is done, and with the TransactionScope still open, I open another TransactionScope for each permission in the list and close it immediately after the update has been done.
The update for the role works just fine.
But now the problem is that when it tries to instantiate a TransactionScope for the first permission in the list, it throws an error saying:
Error: Time-out interval must be less than 2^32-2. Parameter name: dueTm
OK, I found a solution.
Scenario:
I have two base classes for my Business Layer, BaseBLL and BaseListBLL. Each of these classes provides a method that calls an abstract method that must be implemented in your concrete class. These classes start a TransactionScope before handing off to the abstract method to perform tasks like update or insert.
Note: after the BaseListBLL starts a TransactionScope, it calls the method in the BaseBLL that also starts its own transaction. This is because we'll be inserting or updating a list of classes derived from BaseBLL, which are all contained in a class derived from BaseListBLL.
Now, for example, if we are to update a role in the roles table, only one transaction would be started. The complexity comes when the role object contains another class that derives directly from BaseListBLL.
A transaction would be started for the list first, and another for each BaseBLL object in the list, to do all the inserts or updates.
So now, after the first insert, the error:
Error: Time-out interval must be less than 2^32-2. Parameter name: dueTm
is thrown.
What I did was:
For the list class, I set the TransactionScope object this way:
TransactionScope ts = new TransactionScope(TransactionScopeOption.Suppress)
and for the other class
TransactionScope ts = new TransactionScope(TransactionScopeOption.Required, TimeSpan.MaxValue)
The conclusion I have drawn is that the error was being thrown because I was using the same transaction created by the BaseBLL-derived class from the list. I had to suppress the ambient transaction when it was time to do an update of the list of items.
I hope this helps someone else.
We had a similar issue and the answer to this is simple and is found in this thread.
What's happening is that the outer TransactionScope and the inner TransactionScope have a timeout mismatch: the inner TransactionScope is trying to set a timeout greater than the outer TransactionScope's.
How does this happen? The answer is that we were setting the Timeout equal to TimeSpan.MaxValue... BUT... the limit for this is actually TransactionManager.MaximumTimeout. So when the inner TransactionScope sets that timeout, it blows up because the value is greater than the outer TransactionScope's timeout.
This caused problems when nesting transactions:
public static TransactionScope CreateTransactionScope()
{
var transactionOptions = new TransactionOptions
{
IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
Timeout = TimeSpan.MaxValue
};
return new TransactionScope(TransactionScopeOption.Required, transactionOptions);
}
This works fine when nesting transactions:
public static TransactionScope CreateTransactionScope()
{
var transactionOptions = new TransactionOptions
{
IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
Timeout = TransactionManager.MaximumTimeout
};
return new TransactionScope(TransactionScopeOption.Required, transactionOptions);
}
The person who answered claims this is a known bug. In my opinion the fix would simply be to 'fail fast' and throw an exception if the Timeout is set to a value greater than TransactionManager.MaximumTimeout instead of throwing an exception if the inner TransactionScope tries to set a value greater than the outer TransactionScope.
The source code of the class System.Transactions.TransactionScope shows that it will create a System.Threading.Timer instance in some situations. The timer requires that dueTime.TotalMilliseconds should not be greater than 0xfffffffeL, but the value passed to dueTime is TimeSpan.MaxValue (0x7fffffffffffffffL in ticks, namely, 0x346DC5D638865 in milliseconds). That's why the exception is thrown.
So I think the better solution is to give the TransactionScope a smaller timeout.
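For example, a helper that clamps the requested timeout (a sketch, not from the original answer):

public static TransactionScope CreateTransactionScope(TimeSpan timeout)
{
    // Clamp to the maximum the transaction manager allows, so a nested scope
    // never asks for more than the outer scope can accept.
    if (timeout > TransactionManager.MaximumTimeout)
    {
        timeout = TransactionManager.MaximumTimeout;
    }

    var transactionOptions = new TransactionOptions
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
        Timeout = timeout
    };
    return new TransactionScope(TransactionScopeOption.Required, transactionOptions);
}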