Proper way of using BeginTransaction with Dapper.IDbConnection - c#

What is the proper way of using BeginTransaction() with IDbConnection in Dapper?
I have created a method in which I have to use BeginTransaction(). Here is the code:
using (IDbConnection cn = DBConnection)
{
var oTransaction = cn.BeginTransaction();
try
{
// SAVE BASIC CONSULT DETAIL
var oPara = new DynamicParameters();
oPara.Add("#PatientID", iPatientID, dbType: DbType.Int32);
..........blah......blah............
}
catch (Exception ex)
{
oTransaction.Rollback();
return new SaveResponse { Success = false, ResponseString = ex.Message };
}
}
When I executed the above method, I got an exception:
Invalid operation. The connection is closed.
This is because you can't begin a transaction before the connection is opened. When I add the line cn.Open();, the error is resolved. But I have read somewhere that manually opening the connection is bad practice! Dapper opens a connection only when it needs to.
In Entity framework you can handle a transaction using a TransactionScope.
So my question is: what is a good practice for handling a transaction in Dapper without adding the line cn.Open()? I guess there should be a proper way to do this.

Manually opening a connection is not "bad practice"; dapper works with open or closed connections as a convenience, nothing more. A common gotcha is people having connections that are left open, unused, for too long without ever releasing them to the pool - however, this isn't a problem in most cases, and you can certainly do:
using(var cn = CreateConnection()) {
cn.Open();
using(var tran = cn.BeginTransaction()) {
try {
// multiple operations involving cn and tran here
tran.Commit();
} catch {
tran.Rollback();
throw;
}
}
}
Note that dapper has an optional parameter to pass in the transaction, for example:
cn.Execute(sql, args, transaction: tran);
I am actually tempted to make extension methods on IDbTransaction that work similarly, since a transaction always exposes .Connection; this would allow:
tran.Execute(sql, args);
But this does not exist today.
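For illustration only, a minimal sketch of what such an extension method could look like (this is not part of Dapper; the class and method names here are hypothetical, and it simply forwards to the transaction's connection):
using System.Data;
using Dapper;

public static class TransactionExtensions
{
    // Hypothetical helper: lets you write tran.Execute(sql, args) by forwarding
    // the call to the connection that the transaction was created from.
    public static int Execute(this IDbTransaction transaction, string sql, object param = null)
    {
        return transaction.Connection.Execute(sql, param, transaction: transaction);
    }
}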
TransactionScope is another option, but has different semantics: this could involve the LTM or DTC, depending on ... well, luck, mainly. It is also tempting to create a wrapper around IDbTransaction that doesn't need the try/catch - more like how TransactionScope works; something like (this also does not exist):
using(var cn = CreateConnection())
using(var tran = cn.SimpleTransaction())
{
tran.Execute(...);
tran.Execute(...);
tran.Complete();
}
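Again purely as a sketch (this wrapper does not exist in Dapper; the names are made up), such a SimpleTransaction helper might look roughly like this, committing on Complete() and rolling back on Dispose() otherwise:
using System;
using System.Data;
using Dapper;

public sealed class SimpleTransaction : IDisposable
{
    private readonly IDbTransaction _transaction;
    private bool _completed;

    public SimpleTransaction(IDbConnection connection)
    {
        _transaction = connection.BeginTransaction();
    }

    // Forward to Dapper, always passing the wrapped transaction.
    public int Execute(string sql, object param = null)
    {
        return _transaction.Connection.Execute(sql, param, transaction: _transaction);
    }

    public void Complete()
    {
        _transaction.Commit();
        _completed = true;
    }

    public void Dispose()
    {
        // If Complete() was never called, roll back - similar to TransactionScope semantics.
        if (!_completed) _transaction.Rollback();
        _transaction.Dispose();
    }
}

public static class SimpleTransactionExtensions
{
    public static SimpleTransaction SimpleTransaction(this IDbConnection connection)
    {
        return new SimpleTransaction(connection);
    }
}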

You should not call
cn.Close();
because the using block will try to close too.
For the transaction part, yes, you can use TransactionScope as well, since it is not an Entity Framework-specific technique.
Have a look at this SO answer: https://stackoverflow.com/a/6874617/566608
It explains how to enlist your connection in the transaction scope.
The important aspect is: connections are automatically enlisted in the transaction if and only if you open the connection inside the scope.
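As a rough sketch of that point (the connection string, table and Dapper call are placeholders, and the usual System.Transactions / System.Data.SqlClient / Dapper usings are assumed), opening the connection inside the scope is what makes it enlist automatically:
using (var scope = new TransactionScope())
using (var cn = new SqlConnection(connectionString)) // connectionString assumed to be defined elsewhere
{
    cn.Open(); // opened inside the scope, so it enlists in the ambient transaction
    cn.Execute("INSERT INTO Patient (Name) VALUES (@Name)", new { Name = "Test" }); // hypothetical table
    scope.Complete(); // if this is not reached, everything rolls back when the scope is disposed
}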

Take a look at Tim Schreiber's solution, which is simple yet powerful, implemented using the repository pattern, and written with Dapper transactions in mind.
The Commit() in the code below shows it.
public class UnitOfWork : IUnitOfWork
{
private IDbConnection _connection;
private IDbTransaction _transaction;
private IBreedRepository _breedRepository;
private ICatRepository _catRepository;
private bool _disposed;
public UnitOfWork(string connectionString)
{
_connection = new SqlConnection(connectionString);
_connection.Open();
_transaction = _connection.BeginTransaction();
}
public IBreedRepository BreedRepository
{
get { return _breedRepository ?? (_breedRepository = new BreedRepository(_transaction)); }
}
public ICatRepository CatRepository
{
get { return _catRepository ?? (_catRepository = new CatRepository(_transaction)); }
}
public void Commit()
{
try
{
_transaction.Commit();
}
catch
{
_transaction.Rollback();
throw;
}
finally
{
_transaction.Dispose();
_transaction = _connection.BeginTransaction();
resetRepositories();
}
}
private void resetRepositories()
{
_breedRepository = null;
_catRepository = null;
}
public void Dispose()
{
dispose(true);
GC.SuppressFinalize(this);
}
private void dispose(bool disposing)
{
if (!_disposed)
{
if(disposing)
{
if (_transaction != null)
{
_transaction.Dispose();
_transaction = null;
}
if(_connection != null)
{
_connection.Dispose();
_connection = null;
}
}
_disposed = true;
}
}
~UnitOfWork()
{
dispose(false);
}
}
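For context, a caller might use this unit of work roughly as follows (the Add methods and the Breed/Cat types are assumed from the rest of Tim Schreiber's example and are not shown above; the connection string is a placeholder):
using (var uow = new UnitOfWork(connectionString))
{
    uow.BreedRepository.Add(new Breed { Name = "Maine Coon" });          // hypothetical repository method
    uow.CatRepository.Add(new Cat { Name = "Whiskers", BreedId = 1 });   // hypothetical repository method
    uow.Commit(); // commits both inserts in one transaction; disposing without Commit() rolls back
}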

There are two intended ways to use transactions with Dapper.
Pass your IDbTransaction to your normal Dapper call.
Before:
var affectedRows = connection.Execute(sql, new {CustomerName = "Mark"});
After:
var affectedRows = connection.Execute(sql, new {CustomerName = "Mark"}, transaction: tx);
Use the new .Execute extension method that Dapper adds to IDbTransaction itself:
tx.Execute(sql, new {CustomerName = "Mark"});
Note: the variable tx comes from IDbTransaction tx = connection.BeginTransaction();
This is how you're supposed to use transactions with Dapper; neither of them is TransactionScope.
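Putting option 1 together end to end might look like this (the connection string and SQL are placeholders, and the usual System.Data.SqlClient and Dapper usings are assumed):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (IDbTransaction tx = connection.BeginTransaction())
    {
        var affectedRows = connection.Execute(
            "UPDATE Customer SET Name = @CustomerName WHERE Id = @Id", // hypothetical SQL
            new { CustomerName = "Mark", Id = 1 },
            transaction: tx);
        tx.Commit(); // or tx.Rollback() on failure
    }
}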
Bonus Reading
https://stackoverflow.com/a/67474832/12597

Related

Transaction Scope not rolling back with async/await

I'm having an issue getting my transaction scope to rollback while using async/await. Everything works as it should without the transaction scope, but whenever I intentionally cause an exception (duplicate primary key on the insert for 2nd iteration), no rollback (for the update) or any sort of transaction related error occurs.
I should also note that unless "OLE DB Services=-4" is in the connection string, I receive the error:
"The ITransactionLocal interface is not supported by the 'Microsoft.ACE.OLEDB.12.0' provider. Local transactions are unavailable with the current provider."
The code in the button event handler below is just an example for testing the transaction scope. The main goal is to be able to update multiple tables in a loop that's contained in a transaction asynchronously, so I can avoid UI deadlocks and perform rollbacks for any exceptions that may occur during the loop. Any alternatives or suggestions to my problem are appreciated, thanks :)
private async void button1_Click(object sender, EventArgs e)
{
try
{
int customerCount = 150; // First 150 rows of customer table
TransactionScope transaction = null;
using (OleDbConnection dbConn = new OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0; OLE DB Services=-4; Data Source=" + filePath))
{
dbConn.Open();
using (transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
for (int i = 0; i < customerCount; i++)
{
// Update field indicating customer made an invoice
var taskName = sql.executeAsync("UPDATE Customer SET lastInvoiceDate = #date WHERE customerID = #custID", dbConn,
new OleDbParameter("#date", DateTime.Today),
new OleDbParameter("#custID", i));
// Insert new invoice - Breaks here
var taskInsert = sql.executeAsync("INSERT INTO Invoice VALUES (1, 'thisisatestinvoice', '$100.50')", dbConn);
await Task.WhenAll(taskName, taskInsert);
}
}
// All updates executed properly
transaction.Complete();
}
}
catch (AggregateException exception)
{
foreach (Exception ex in exception.InnerExceptions)
{
MessageBox.Show(ex.Message);
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
public async Task executeAsync(string dbQuery, OleDbConnection dbConn, params OleDbParameter[] parameters)
{
var dbComm = new OleDbCommand(dbQuery, dbConn);
if (parameters != null)
dbComm.Parameters.AddRange(parameters);
await dbComm.ExecuteNonQueryAsync().ConfigureAwait(false);
}
I wasn't able to get the transaction scope to work, and I'm not entirely sure what the issue is. I think it's because I'm on ACE.OLEDB.12.0, but I found another alternative with OleDbTransaction that will roll back if any failures occur.
private async void button1_Click(object sender, EventArgs e)
{
try
{
using (OleDbConnection dbConn = new OleDbConnection(SQLWrapper.CONNECT_STRING))
{
dbConn.Open();
OleDbTransaction dbTrans = dbConn.BeginTransaction();
var taskName = sql.executeAsync("UPDATE Config SET Busname = #name", dbConn, dbTrans,
new OleDbParameter("#name", "name"));
var taskInsert = sql.executeAsync("INSERT INTO Callout VALUES (16, 'ryanistesting')", dbConn, dbTrans);
await Task.WhenAll(taskName, taskInsert);
dbTrans.Commit();
}
}
}
public async Task executeAsync(string dbQuery, OleDbConnection dbConn, OleDbTransaction dbTrans, params OleDbParameter[] parameters)
{
using (var dbComm = new OleDbCommand(dbQuery, dbConn))
{
if (parameters != null)
dbComm.Parameters.AddRange(parameters);
if (dbTrans != null)
dbComm.Transaction = dbTrans;
await dbComm.ExecuteNonQueryAsync().ConfigureAwait(false);
}
}
Maybe you know the answer by now, as it's been a while. The same issue happened to me. The thing is that when you use ConfigureAwait(false) you are telling the runtime to continue on any free thread-pool thread instead of the context that originally called the async method. A different thread with a different context gets used, which creates another connection under the same transaction scope that was meant for only one connection, and the transaction then gets promoted to a distributed transaction. That was my experience; I had to give up on running the operations fully in parallel and instead wait for each one to finish, by not using ConfigureAwait(false) or Task.WhenAll. Hope that helps!
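A minimal sketch of that sequential approach, mirroring the question's tables and assuming the usual System.Data.OleDb / System.Transactions usings (the connection string is a placeholder):
private async Task UpdateAndInsertAsync(string connectionString)
{
    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    using (var dbConn = new OleDbConnection(connectionString))
    {
        dbConn.Open();
        // Await each command in turn; no ConfigureAwait(false), so the ambient transaction context flows.
        using (var update = new OleDbCommand(
            "UPDATE Customer SET lastInvoiceDate = ? WHERE customerID = ?", dbConn))
        {
            update.Parameters.Add(new OleDbParameter("@date", DateTime.Today));
            update.Parameters.Add(new OleDbParameter("@custID", 1));
            await update.ExecuteNonQueryAsync();
        }
        using (var insert = new OleDbCommand(
            "INSERT INTO Invoice VALUES (1, 'thisisatestinvoice', '$100.50')", dbConn))
        {
            await insert.ExecuteNonQueryAsync();
        }
        scope.Complete();
    }
}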

Right way to test connection strings

I tried this code on an array of 3 connection strings without any complaints. My question is: is it okay to invoke multiple Dispose calls on the same object?
foreach (var s in strings)
{
connection.ConnectionString = s;
connection.Open();
connection.Close();
connection.Dispose();
}
Here is one way to do it:
bool TestConnection<T>(string connectionString) where T : IDbConnection, new()
{
using(T con = new T())
{
con.ConnectionString = connectionString;
con.Open();
return true;
}
}
Another way to implement connection testing code is with an extension method (note this does not dispose the connection object):
public static Tuple<bool, Exception> TestConnection(this IDbConnection connection)
{
try
{
connection.Open();
connection.Close();
return new Tuple<bool, Exception>(true, null);
}
catch(Exception e)
{
return new Tuple<bool, Exception>(false, e);
}
}
Please note that in this version I'm returning a Tuple of bool and Exception, so whoever uses this code can get information on why the connection failed without having to wrap the call in a try...catch block. Of course, you can choose to simply return a bool just like in the first example; this is just for demonstration purposes.
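For example, a caller might use the extension method like this (the connection string is a placeholder):
IDbConnection connection = new SqlConnection("Data Source=.;Initial Catalog=Test;Integrated Security=true");
var result = connection.TestConnection();
if (!result.Item1)
{
    Console.WriteLine("Connection failed: " + result.Item2.Message);
}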
You should fix your code this way:
foreach (var s in strings)
{
connection.ConnectionString = s;
connection.Open();
connection.Close();
}
The connection doesn't need to be disposed here, or at least you shouldn't dispose an object that you want to use again.
Anyway, this isn't a good approach.
You should have a
using(DbContext db = new DbContext()){
//SQL Actions
}
for every piece of DB-related code, to avoid problems ^^
public bool TestConnection(IDbConnection con)
{
using (con)
{
try
{
con.Open();
con.Close();
return true;
}
catch
{
return false;
}
}
}
It's "ok" with what you are doing (completely different connect everytime with no ran queries) but as Amy said in the comments, it really doesn't get you anything special. Should probably abide by the wisdom of not reusing disposed objects.
Also for SqlConnection, calling Close then dispose is repetitive since it will call its close upon dispose.
Going to throw my code into the mix as well, comments in code:
private bool DBValidCheck(string connectionString)
{
//The using statement disposes the object that implements IDisposable once it exits the block. Takes care of the dispose for us
using (var connection = new SqlConnection(connectionString))
{
try
{
connection.Open();
return true;
}
catch
{
return false;
}
}
}

Polymorphic class using lambda

I'm not sure if this is possible. I'm trying to learn a little bit about lambda expressions because of a program that I am writing with my buddy. He has a Database class that talks to an MS SQL server. I wanted to do some testing on the classes, so I made a simple Compact database: in my TestFixtureSetup I populate the tables (2 tables right now) and then in teardown I delete all the data. His database class uses something like this for its SQL connection:
protected void WithConnection(Action<SqlConnection> sqlBlock)
{
try
{
using (SqlConnection connection = new SqlConnection(this.ConnectionString))
{
connection.Open();
sqlBlock(connection);
}
}
catch (Exception ex)
{
Console.WriteLine(#"Exception during database connection: {0}", ex);
}
}
I think I found a post that Jon Skeet answered using almost the same code. https://stackoverflow.com/a/1063112/1329396
I think that this is cool, but my mock database uses SqlCeDataReader. I did a little research and found that they share a common base class, System.Data.Common.DbDataReader, which is only one level up. I haven't checked much with it, but I was wondering whether it is possible to use polymorphism with the WithConnection style of programming, so that I could use my SqlCeDataReader and his SqlDataReader. Is there a way to do this?
Use a factory function. If you can get by with just using a DbConnection for all your Actions you don't need generics:
protected void WithConnection(Action<DbConnection> sqlBlock, Func<DbConnection> dbCxnFactory)
{
try
{
using (DbConnection connection = dbCxnFactory())
{
connection.ConnectionString = this.ConnectionString;
connection.Open();
sqlBlock(connection);
}
}
catch (Exception ex)
{
Console.WriteLine(#"Exception during database connection: {0}", ex);
}
}
If you want to specialize, some actions to SqlConnection only and some to SqlCeConnection only, then you can make it generic:
protected void WithConnection<T>(Action<T> sqlBlock, Func<T> dbCxnFactory) where T : DbConnection
{
try
{
using (T connection = dbCxnFactory())
{
connection.ConnectionString = this.ConnectionString;
connection.Open();
sqlBlock(connection);
}
}
catch (Exception ex)
{
Console.WriteLine(#"Exception during database connection: {0}", ex);
}
}
If you don't want to pass in the factory as a parameter, you can use a generic with new()
protected void WithConnection<TCxn>(Action<TCxn> sqlBlock) where TCxn : DbConnection, new()
{
try
{
using (var cxn = new TCxn())
{
cxn.ConnectionString = this.ConnectionString;
cxn.Open();
sqlBlock(cxn);
}
}
catch (Exception ex)
{
Console.WriteLine(#"Exception during database connection: {0}", ex);
}
}
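For illustration, the new() overload might then be called like this from inside the class (or a derived test fixture); SqlCeConnection/SqlCeCommand live in System.Data.SqlServerCe, and the table name is a placeholder:
WithConnection<SqlConnection>(cxn =>
{
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Customer", cxn))
    {
        Console.WriteLine(cmd.ExecuteScalar());
    }
});

WithConnection<SqlCeConnection>(cxn =>
{
    using (var cmd = new SqlCeCommand("SELECT COUNT(*) FROM Customer", cxn))
    {
        Console.WriteLine(cmd.ExecuteScalar());
    }
});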

Recommended practice for stopping transactions escalating to distributed when using transactionscope

Using the TransactionScope object to set up an implicit transaction that doesn't need to be passed across function calls is great! However, if a connection is opened whilst another is already open, the transaction coordinator silently escalates the transaction to be distributed (needing MSDTC service to be running and taking up much more resources and time).
So, this is fine:
using (var ts = new TransactionScope())
{
using (var c = DatabaseManager.GetOpenConnection())
{
// Do Work
}
using (var c = DatabaseManager.GetOpenConnection())
{
// Do more work in same transaction using different connection
}
ts.Complete();
}
But this escalates the transaction:
using (var ts = new TransactionScope())
{
using (var c = DatabaseManager.GetOpenConnection())
{
// Do Work
using (var nestedConnection = DatabaseManager.GetOpenConnection())
{
// Do more work in same transaction using different nested connection - escalated transaction to distributed
}
}
ts.Complete();
}
Is there a recommended practice to avoid escalating transactions in this way, whilst still using nested connections?
The best I can come up with at the moment is having a ThreadStatic connection and reusing that if Transaction.Current is set, like so:
public static class DatabaseManager
{
private const string _connectionString = "data source=.\\sql2008; initial catalog=test; integrated security=true";
[ThreadStatic]
private static SqlConnection _transactionConnection;
[ThreadStatic] private static int _connectionNesting;
private static SqlConnection GetTransactionConnection()
{
if (_transactionConnection == null)
{
Transaction.Current.TransactionCompleted += ((s, e) =>
{
_connectionNesting = 0;
if (_transactionConnection != null)
{
_transactionConnection.Dispose();
_transactionConnection = null;
}
});
_transactionConnection = new SqlConnection(_connectionString);
_transactionConnection.Disposed += ((s, e) =>
{
if (Transaction.Current != null)
{
_connectionNesting--;
if (_connectionNesting > 0)
{
// Since connection is nested and same as parent, need to keep it open as parent is not expecting it to be closed!
_transactionConnection.ConnectionString = _connectionString;
_transactionConnection.Open();
}
else
{
// Can forget transaction connection and spin up a new one next time one's asked for inside this transaction
_transactionConnection = null;
}
}
});
}
return _transactionConnection;
}
public static SqlConnection GetOpenConnection()
{
SqlConnection connection;
if (Transaction.Current != null)
{
connection = GetTransactionConnection();
_connectionNesting++;
}
else
{
connection = new SqlConnection(_connectionString);
}
if (connection.State != ConnectionState.Open)
{
connection.Open();
}
return connection;
}
}
Edit: So, if the answer is to reuse the same connection when it's nested inside a transactionscope, as the code above does, I wonder about the implications of disposing of this connection mid-transaction.
So far as I can see (using Reflector to examine code), the connection's settings (connection string etc.) are reset and the connection is closed. So (in theory), re-setting the connection string and opening the connection on subsequent calls should "reuse" the connection and prevent escalation (and my initial testing agrees with this).
It does seem a little hacky though... and I'm sure there must be a best practice somewhere that states that one should not continue to use an object after it's been disposed of!
However, since I cannot subclass the sealed SqlConnection, and want to maintain my transaction-agnostic connection-pool-friendly methods, I struggle (but would be delighted) to see a better way.
Also, I realised that I could force non-nested connections by throwing an exception if application code attempts to open a nested connection (which in most cases is unnecessary, in our codebase):
public static class DatabaseManager
{
private const string _connectionString = "data source=.\\sql2008; initial catalog=test; integrated security=true; enlist=true;Application Name='jimmy'";
[ThreadStatic]
private static bool _transactionHooked;
[ThreadStatic]
private static bool _openConnection;
public static SqlConnection GetOpenConnection()
{
var connection = new SqlConnection(_connectionString);
if (Transaction.Current != null)
{
if (_openConnection)
{
throw new ApplicationException("Nested connections in transaction not allowed");
}
_openConnection = true;
connection.Disposed += ((s, e) => _openConnection = false);
if (!_transactionHooked)
{
Transaction.Current.TransactionCompleted += ((s, e) =>
{
_openConnection = false;
_transactionHooked = false;
});
_transactionHooked = true;
}
}
connection.Open();
return connection;
}
}
Would still value a less hacky solution :)
One of the primary reasons for transaction escalation is when you have multiple (different) connections involved in a transaction. This almost always escalates to a distributed transaction. And it is indeed a pain.
This is why we make sure that all our transactions use a single connection object. There are several ways to do this. For the most part, we use a thread-static object to store a connection object, and our classes that do the database persistence work use that thread-static connection object (which is shared, of course). This prevents multiple connection objects from being used and has eliminated transaction escalation. You could also achieve this by simply passing a connection object from method to method, but this isn't as clean, IMO.
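As a rough sketch of the simpler "pass the connection around" variant (DoWork/DoMoreWork and the connection string are placeholders; the usual System.Transactions and System.Data.SqlClient usings are assumed):
using (var ts = new TransactionScope())
using (var cn = new SqlConnection(connectionString))
{
    cn.Open(); // opened inside the scope, so it enlists in the ambient transaction

    DoWork(cn);     // hypothetical methods that accept the shared connection
    DoMoreWork(cn); // no second connection is opened, so nothing escalates to the DTC

    ts.Complete();
}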

TransactionScope not rolling back transaction

Here is the current architecture of my transaction scope source code. The third insert throws a .NET exception (not a SQL exception) and the two previous insert statements are not rolled back. What am I doing wrong?
EDIT: I removed the try/catch from Insert2 and Insert3. I also removed the exception handling utility from the Insert1 try/catch and put "throw ex". It still does not roll back the transaction.
EDIT 2: I added the try/catch back on the Insert3 method and just put a "throw" in the catch statement. It still does not roll back the transaction.
UPDATE: Based on the feedback I received, the "SqlHelper" class uses the SqlConnection object to establish a connection to the database, then creates a SqlCommand object, sets the CommandType property to "StoredProcedure" and calls the ExecuteNonQuery method of the SqlCommand.
I also did not add Transaction Binding=Explicit Unbind to the current connection string. I will add that during my next test.
public void InsertStuff()
{
try
{
using(TransactionScope ts = new TransactionScope())
{
//perform insert 1
using(SqlHelper sh = new SqlHelper())
{
SqlParameter[] sp = { /* create parameters for first insert */ };
sh.Insert("MyInsert1", sp);
}
//perform insert 2
this.Insert2();
//perform insert 3 - breaks here!!!!!
this.Insert3();
ts.Complete();
}
}
catch(Exception ex)
{
throw ex;
}
}
public void Insert2()
{
//perform insert 2
using(SqlHelper sh = new SqlHelper())
{
SqlParameter[] sp = { /* create parameters for second insert */ };
sh.Insert("MyInsert2", sp);
}
}
public void Insert3()
{
//perform insert 3
using(SqlHelper sh = new SqlHelper())
{
SqlParameter[] sp = { /*create parameters for third insert */ };
sh.Insert("MyInsert3", sp);
}
}
I have also run into a similar issue. My problem occurred because the SqlConnection I used in my SqlCommands was already open before the TransactionScope was created, so it never got enlisted in the TransactionScope as a transaction.
Is it possible that the SqlHelper class is reusing an instance of SqlConnection that is open before you enter your TransactionScope block?
It looks like you are catching the exception in Insert3(), so your code continues after the call. If you want it to roll back, you'll need to let the exception bubble up to the try/catch block in the main routine so that the ts.Complete() statement never gets called.
An implicit rollback will only occur if the using block is exited without calling ts.Complete(). Because you are handling the exception in Insert3(), the exception never causes the using statement to exit.
Either rethrow the exception or notify the caller that a rollback is needed (perhaps change the signature of Insert3() to bool Insert3()?); a sketch of the rethrow option follows below.
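Purely as an illustration of the rethrow option (based on the earlier revision of Insert3() that had its own try/catch; SqlHelper is the question's own helper class):
public void Insert3()
{
    try
    {
        using (SqlHelper sh = new SqlHelper())
        {
            SqlParameter[] sp = { /* create parameters for third insert */ };
            sh.Insert("MyInsert3", sp);
        }
    }
    catch (Exception)
    {
        // Log here if needed, then rethrow so the using(TransactionScope) block in the
        // caller exits without Complete() being called, which rolls the transaction back.
        throw;
    }
}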
(based on the edited version that doesn't swallow exceptions)
How long do the operations take? If any of them are very long running, it is possible that the Transaction Binding bug feature has bitten you - i.e. the connection has become detached. Try adding Transaction Binding=Explicit Unbind to the connection string.
I don't see your helper class, but a transaction scope rolls back if you don't call Complete, even if the error comes from .NET code rather than SQL. I copied one example for you; you may be doing something wrong in debugging. This example has an error in .NET code and a similar catch block to yours.
private static readonly string _connectionString = ConnectionString.GetDbConnection();
private const string inserttStr = @"INSERT INTO dbo.testTable (col1) VALUES(@test);";
/// <summary>
/// Execute command on DBMS.
/// </summary>
/// <param name="command">Command to execute.</param>
private void ExecuteNonQuery(IDbCommand command)
{
if (command == null)
throw new ArgumentNullException("Parameter 'command' can't be null!");
using (IDbConnection connection = new SqlConnection(_connectionString))
{
command.Connection = connection;
connection.Open();
command.ExecuteNonQuery();
}
}
public void FirstMethod()
{
IDbCommand command = new SqlCommand(inserttStr);
command.Parameters.Add(new SqlParameter("#test", "Hello1"));
ExecuteNonQuery(command);
}
public void SecondMethod()
{
IDbCommand command = new SqlCommand(inserttStr);
command.Parameters.Add(new SqlParameter("#test", "Hello2"));
ExecuteNonQuery(command);
}
public void ThirdMethodCauseNetException()
{
IDbCommand command = new SqlCommand(inserttStr);
command.Parameters.Add(new SqlParameter("#test", "Hello3"));
ExecuteNonQuery(command);
int a = 0;
int b = 1/a;
}
public void MainWrap()
{
TransactionOptions tso = new TransactionOptions();
tso.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
//TransactionScopeOption.Required, tso
try
{
using (TransactionScope sc = new TransactionScope())
{
FirstMethod();
SecondMethod();
ThirdMethodCauseNetException();
sc.Complete();
}
}
catch (Exception ex)
{
logger.ErrorException("eee ",ex);
}
}
If you want to debug your transactions, you can use this script to see locks and waiting status etc.
SELECT
request_session_id AS spid,
CASE transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL ,
resource_type AS restype,
resource_database_id AS dbid,
DB_NAME(resource_database_id) as DBNAME,
resource_description AS res,
resource_associated_entity_id AS resid,
CASE
when resource_type = 'OBJECT' then OBJECT_NAME( resource_associated_entity_id)
ELSE 'N/A'
END as ObjectName,
request_mode AS mode,
request_status AS status
FROM sys.dm_tran_locks l
left join sys.dm_exec_sessions s on l.request_session_id = s.session_id
where resource_database_id = 24
order by spid, restype, dbname;
You will see one SPID for the two method calls before the exception method is called.
The default isolation level is Serializable. You can read more about locks and transactions here.
I ran into a similar issue when I had a call to a WCF service operation in TransactionScope.
I noticed transaction flow was not allowed due to the 'TransactionFlow' attribute in the service interface. Therefore, the WCF service operation was not using the transaction used by the outer transaction scope. Changing it to allow transaction flow as shown below fixed my problem.
[TransactionFlow(TransactionFlowOption.NotAllowed)]
to
[TransactionFlow(TransactionFlowOption.Allowed)]
