C# .NET Core Transaction null

I have many classes, each associated with a specific table. At a higher level I have parent transactions that call various methods, so the child methods must be able to work inside a transient transaction, and often it is not known beforehand whether a method will at some stage be part of a transaction, since we try to keep the methods generic.
Then in some cases we need Dapper queries, e.g. to turn identity insert on/off. I understood that Dapper requires passing a transaction as a parameter, otherwise the command will not be enlisted in the transaction (it turns out I was wrong, see below).
The DbContext (pooling) is set up per component/DLL. Since a connection is only enlisted when it is opened inside a transaction, a local scope of the context is used to ensure the connection is opened for this transaction. That also helps when calling the same methods from e.g. health checks, which would otherwise complain about too many open connections when many of them reuse the connections opened by services, and having this scope in the methods makes them easier to run in parallel threads.
In other words, these methods can be called from parents which may be parallel job parents, singletons requiring a service scope, or parent transactions that require a hierarchy of transients.
The problem was: for some reason transaction in the following code is always null.
try
{
    using TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,
        System.TimeSpan.FromMinutes(10), TransactionScopeAsyncFlowOption.Enabled);
    using Context localcontext = new Context(new DbContextOptionsBuilder<Context>()
        .UseSqlServer(_options.ConnectionString).Options);

    // just for safety:
    localcontext.Database.GetDbConnection().Open();

    // the following line is only for dapper input:
    IDbContextTransaction transaction = localcontext.Database.CurrentTransaction;

    await localcontext.Database.GetDbConnection()
        .ExecuteAsync("SET IDENTITY_INSERT [dbo].[Whatever] ON",
            null, (System.Data.IDbTransaction)transaction);
}
(which I took from here: Pass current transaction to DbCommand, and from https://github.com/zzzprojects/Dapper.Transaction )
UPDATE / SOLUTION:
Ok. So... when using a TransactionScope, the transaction parameter does not have to be passed to Dapper to ensure the command enlists in the transaction. That was the clue.

Remove this line
IDbContextTransaction transaction = localcontext.Database.CurrentTransaction;
If there's an active TransactionScope your SqlConnection will be automatically enlisted in it. The whole point of TransactionScope is that your data access methods can be completely free of transaction handling. Then, in some outer business layer or controller method, the transaction is orchestrated.
The reason CurrentTransaction is null is that there are two different ways to handle transactions. If you want the current System.Transactions.Transaction, you get it with System.Transactions.Transaction.Current.
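For example, inside a TransactionScope the ambient transaction shows up on Transaction.Current, not on the DbContext. A minimal sketch (not from the original post; it only needs a reference to System.Transactions):
using System;
using System.Transactions;

class AmbientTransactionDemo
{
    static void Main()
    {
        // Outside any scope there is no ambient transaction.
        Console.WriteLine(Transaction.Current == null);    // True

        using (var scope = new TransactionScope(TransactionScopeOption.Required,
            TransactionScopeAsyncFlowOption.Enabled))
        {
            // Inside the scope the ambient transaction lives here, while
            // DbContext.Database.CurrentTransaction stays null unless you
            // explicitly call BeginTransaction on the context.
            Console.WriteLine(Transaction.Current == null); // False
            scope.Complete();
        }
    }
}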
Stepping back, there are 3 separate ways to manage transactions with SqlConnection.
TSQL Transactions: You can use the TSQL API directly, issuing BEGIN TRAN, COMMIT TRAN, etc.
ADO.NET Transactions: SqlConnection.BeginTransaction, IDbTransaction, SqlTransaction, etc. This is a wrapper over the TSQL API, and is a PITA because it introduces a useless requirement to pass the SqlTransaction to each SqlCommand that you want to enlist in the transaction. Enlisting TSQL commands in the current transaction is not optional, and never has been, and that's a pain because methods that use SqlCommand may not know whether there is a transaction (see the sketch after this list). Dapper and EF both wrap this API in their transaction handling methods.
System.Transactions Transactions: Partly because of this, System.Transactions was introduced in .NET 2.0 as a new and unified way to handle transactions in .NET, and SqlClient added support for it. The main innovation of System.Transactions was adding "ambient" transactions, so code could be agnostic about whether there's a transaction and the right thing will just happen. When opening a SqlConnection, if there is a current Transaction, the SqlConnection will be enlisted in it, and the changes made using the SqlConnection will not be committed until the Transaction is committed. There is no need for your ADO.NET code to know about the Transaction. Dapper and EF are both built on top of ADO.NET and SqlClient, so this all just works.
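To illustrate the second point, a minimal sketch of the ADO.NET style, where the SqlTransaction has to be handed to every SqlCommand by hand (the connection string and table name are placeholders):
using (var connection = new SqlConnection(_connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        using (var command = new SqlCommand(
            "DELETE FROM [dbo].[Whatever]", connection, transaction))
        {
            // Omitting the transaction argument here makes SqlClient throw,
            // because the command would run outside the pending local transaction.
            command.ExecuteNonQuery();
        }
        transaction.Commit();
    }
}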

It's easier to explain what's wrong by showing what the code should be:
using (var connection = new SqlConnection(_connectionString))
{
    await connection.ExecuteAsync("SET IDENTITY_INSERT [dbo].[Whatever] ON");
}
Where ExecuteAsync comes from Dapper.
There's no reason to create a transaction, much less a transaction scope, to execute a single command.
There's no reason to create a DbContext just to open a connection to the database either, or to execute raw SQL commands. DbContext isn't a database connection; its job is to map objects to relational data. There are no objects involved here.
To execute multiple commands there's no reason to use multiple connections. Just execute the commands one after the other. If it's really necessary, use an explicit database transaction around those commands. Or create the connection inside a single transaction scope.
Let's say you have an array with those commands, e.g. something read from a script file:
string[] commands = new[] { ... };

using (var connection = new SqlConnection(_connectionString))
{
    await connection.OpenAsync();
    using (var transaction = connection.BeginTransaction())
    {
        foreach (var sql in commands)
        {
            await connection.ExecuteAsync(sql, transaction: transaction);
        }
        transaction.Commit();
    }
}
Doing the same thing using a TransactionScope only requires opening the connection inside the transaction scope.
string[] commands = new[] { ... };

using (var scope = new TransactionScope(TransactionScopeOption.Required,
    System.TimeSpan.FromMinutes(10), TransactionScopeAsyncFlowOption.Enabled))
using (var connection = new SqlConnection(_connectionString))
{
    await connection.OpenAsync();
    foreach (var sql in commands)
    {
        await connection.ExecuteAsync(sql);
    }
    scope.Complete();
}

Related

Multiple TransactionScopes in different classes C#

Good day,
I have a method which performs several database commits (about 15) across different classes. I need all the database changes to be applied only if the method did not throw any exception. I am thinking about using a TransactionScope, and my question is whether I can use a single instance of that TransactionScope across all the different classes, and if not, what is the best practice for performing a rollback in case of an exception?
Thanks!
You usually don't need to perform a rollback explicitly; methods that perform database operations might not even be aware of the ambient transaction (that is, you don't need to pass the TransactionScope to them). Just do:
using (var tran = new TransactionScope())
{
    FirstDatabaseOperation();
    SecondDatabaseOperation();
    // etc
    tran.Complete();
}
If an exception happens in any operation, the transaction will be rolled back for you, because the TransactionScope will be disposed without Complete having been called.
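As a sketch of that behaviour (reusing the placeholder operation names from above; suppose the second one throws):
try
{
    using (var tran = new TransactionScope())
    {
        FirstDatabaseOperation();
        SecondDatabaseOperation();  // suppose this throws
        tran.Complete();            // never reached
    }   // Dispose runs without Complete, so the work is rolled back
}
catch (Exception)
{
    // log / handle the failure; no manual rollback is needed
}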

TransactionScope with several transactions inside

I try to use TransactionScope in this way:
using (TransactionScope ts = new TransactionScope())
{
    DAL.delete();
    DAL.create();
    ts.Complete();
}
where I have an independent DAL (data access layer) module to do the database operations. Each operation like delete() and create() is atomic, i.e. each commits its own changes when called.
I have tried this code in order to wrap these two operations together as a transaction. But no matter whether I call ts.Complete();, they are all committed to the database and no rollback happens.
How can I do in this case? Thanks.
The TransactionScope is creating an ambient transaction which your DAL layer will automatically pick up on. Your code implies you want the delete and create to be considered an atomic operation. If you want them independent, create another TransactionScope block after the first and move your create statement there (see the sketch below).
If you want to roll back, you need to leave the using block without calling Complete on the scope; normally this happens because one of your DAL methods throws an exception.
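For completeness, a sketch of the "independent" variant mentioned above, reusing the asker's DAL methods:
// Each scope commits on its own, so a failure in create()
// no longer rolls back the already-committed delete().
using (var ts1 = new TransactionScope())
{
    DAL.delete();
    ts1.Complete();
}
using (var ts2 = new TransactionScope())
{
    DAL.create();
    ts2.Complete();
}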

Use SqlTransaction & IsolationLevel for lengthy read operation?

I am executing several long-running SQL queries as part of a reporting module. These queries are constructed dynamically at run-time. Depending on the user's input, they may be single or multi-statement, have one or more parameters and operate on one or more database tables - in other words, their form cannot be easily anticipated.
Currently, I am just executing these statements on an ordinary SqlConnection, i.e.
using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();
    // command 1
    // command 2
    // ...
    // command N
}
Because these queries (really query batches) can take a while to execute, I am concerned about locks on tables holding up reads/writes for other users. It is not a problem if the data for these reports changes during the execution of the batch; the report queries should never take precedence over other operations on those tables, nor should they lock them.
For most long-running/multi-statement operations that involve modifying data, I would use transactions. The difference here is that these report queries are not modifying any data. Would I be correct in wrapping these report queries in an SqlTransaction in order to control their isolation level?
i.e:
using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();
    using (SqlTransaction tr = cn.BeginTransaction(IsolationLevel.ReadUncommitted)) {
        // command 1
        // command 2
        // ...
        // command N
        tr.Commit();
    }
}
Would this achieve my desired outcome? Is it correct to commit a transaction, even though no data has been modified? Is there another approach?
Another approach might be to issue, against the connection:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
which achieves the same intent, without messing with a transaction. Or you could use the WITH(NOLOCK) hint on the tables in your query, which has the advantage of not changing the connection at all.
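For instance, a sketch of the SET-based approach in the shape of the question's code (the command placeholders stay as they are):
using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();
    using (var set = new SqlCommand(
        "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;", cn)) {
        set.ExecuteNonQuery();  // affects this connection only
    }
    // command 1
    // command 2
    // ...
    // command N  -- all read under READ UNCOMMITTED
}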
Importantly, note that (unusually): however it gets changed (transaction, transaction-scope, explicit SET, etc), the isolation level is not reset between uses of the same underlying connection when fetching it from the pool. This means that if your code changes the isolation level (directly or indirectly), then none of your code knows what the isolation level of a new connection is:
using (var conn = new SqlConnection(connectionString)) {
    conn.Open();
    // isolation level here could be **ANYTHING**; it could be the default
    // if it is a brand new connection, or could be whatever the last
    // connection was when it finished
}
Which makes the WITH(NOLOCK) quite tempting.
I agree with Marc, but alternatively you could use the NOLOCK query hint on the affected tables. This would give you the ability to control it on a table by table level.
The problem with running any queries without taking shared locks is that you leave yourself open to "non-deterministic" results, and business decisions should not be made on this data.
A better approach may be to investigate either the SNAPSHOT or READ_COMMITTED_SNAPSHOT isolation levels. These give you protection against transactional anomalies without taking locks. The trade-off is that they increase IO against TempDB. Either of these approaches can be applied to the session, as Marc suggested, or to the table, as I suggested.
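As a sketch of the snapshot route (this assumes ALLOW_SNAPSHOT_ISOLATION has already been enabled on the database):
using (SqlConnection cn = new SqlConnection(ConnectionString)) {
    cn.Open();
    // Readers under SNAPSHOT see a consistent row-versioned view of the data
    // (kept in TempDB) without taking shared locks on the base tables.
    using (SqlTransaction tr = cn.BeginTransaction(IsolationLevel.Snapshot)) {
        // command 1 ... command N, each with its Transaction property set to tr
        tr.Commit();
    }
}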
Hope this helps

Executing select sql query in a suppressed transaction scope

There is a select SQL query (e.g. select * from table1) that by design sits inside a TransactionScope in C#. Normally there is an ambient transaction, but if I suppress the ambient transaction when executing this SQL query, is there a performance gain or not?
using (TransactionScope scope1 = new TransactionScope())
{
    // here there are some business processes
    using (TransactionScope scope2 = new TransactionScope(TransactionScopeOption.Suppress)) // Is this suppressed transaction scope useful in terms of performance?
    {
        // Here there is a select sql query with no lock table hint.
    }
}
Yes. Since TransactionScope (as you've used it in your sample) uses the Serializable isolation level, suppressing it for queries that don't need the protection of that isolation level will prevent read locks from being taken on the DB server (especially if you're using READ COMMITTED SNAPSHOT as your default isolation level). Also, if other things you've done would promote the transaction to the DTC (i.e. multiple connections, etc.), you'll save the time needed to coordinate with the DTC, which can be slow.
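If suppression feels too blunt, another option (a sketch, not from the original answer) is to give the outer scope a less aggressive isolation level up front via TransactionOptions:
var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromMinutes(1)
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // business processes and the select query now run under READ COMMITTED
    // rather than the TransactionScope default of Serializable
    scope.Complete();
}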

How to save in Transaction?

Consider this scenario: suppose I have a WPF window with objects bound to its controls
(Schedule, Customer, Contract, ScheduleDetails and Signer specifically), which represent database tables in the backend database. When the user requests a save, I want all the information he/she entered to be saved in one atomic operation, in other words all saving operations in one transaction, so that either all operations succeed or all fail.
My question is: what is the most efficient way to implement the transaction in this scenario?
The most efficient is to use BeginTransaction etc. on your DbConnection, but this isn't convenient, as you must use the same connection and each DbCommand needs the transaction set on it.
The simplest is TransactionScope, and if you are on SQL Server 2005 or above you will rarely notice a significant performance difference between this and BeginTransaction:
using (var tran = new TransactionScope())
{
    SaveA();
    SaveB();
    SaveC();
    SaveD();
    tran.Complete();
}
Here it doesn't matter if SaveA etc use the same connection, as SqlConnection will enlist into a TransactionScope automatically.
Alternatively, let an ORM handle this for you; most will create transactions for saving a group of changes.
One thing to watch: TransactionScope relies on services (the DTC) that may not be available on all clients (since you mention WPF, which is client-side).
