Suppose my project is structured like the .NET Pet Shop.
It has a BLL, a DAL, and a SQLHelper.
Normally I call a BLL function from my web layer, the BLL function calls the DAL function, and finally the DAL calls the SQLHelper.
But in some situations I need a transaction.
For example, in the web layer I need to call some BLL functions. The code is as below:
var m = BLLFunction_1();
var n = BLLFunction_2();
if (m + n < 100)
{
    // need rollback here
}
else
{
    BLLFunction_3();
    // commit here
}
So I am forced to create a transaction object in the web layer, pass it into the BLL function, have the BLL pass it into the DAL, and finally pass it into the SQLHelper.
That's a little ugly.
I wonder what an elegant way to handle this situation would be.
I am assuming you are looking for transactions in ADO.NET.
Basically, you need to wrap your "actions" in a TransactionScope.
try
{
    using (TransactionScope ts = new TransactionScope())
    {
        // perform SQL
        using (SqlHelper sh = new SqlHelper())
        {
            // do stuff
        }
        // call new DAL function
        // call other DAL function
        ts.Complete();
    }
}
catch (Exception)
{
    throw; // rethrow as-is; "throw ex" would reset the stack trace
}
Create the transaction in your BLL functions with TransactionScopeOption.Required:
public void BLLFunction_1()
{
    using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
    {
        // do your work here
        ts.Complete();
    }
}
public void BLLFunction_2()
{
    using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
    {
        // do your work here
        ts.Complete();
    }
}
With TransactionScopeOption.Required, a transaction is required by the scope: it uses the ambient transaction if one already exists, otherwise it creates a new transaction before entering the scope. (This is the default value.) So if your web layer wraps both calls in an outer TransactionScope, BLLFunction_2 will join the same transaction as BLLFunction_1 instead of creating a new one.
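To tie this back to the question, here is a minimal sketch of what the web layer might then look like, assuming BLLFunction_1 and BLLFunction_2 return the values being checked; everything apart from the method names from the question is illustrative:
// Hypothetical web-layer code; requires a reference to System.Transactions
using (TransactionScope scope = new TransactionScope())
{
    var m = BLLFunction_1(); // its Required scope joins this ambient transaction
    var n = BLLFunction_2(); // same transaction, no new one is created

    if (m + n >= 100)
    {
        BLLFunction_3();
        scope.Complete(); // commit: all the work above succeeds together
    }
    // if m + n < 100, Complete() is never called and everything rolls back on Dispose
}
No transaction object has to be passed through the BLL, DAL, or SQLHelper; the ambient transaction flows implicitly.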
Considering this piece of code:
using(TransactionScope tran = new TransactionScope()) {
insertStatementMethod1();
insertStatementMethod2();
// this might fail
try {
insertStatementMethod3();
} catch (Exception e) {
// nothing to do
}
tran.Complete();
}
Is anything done in insertStatementMethod1 and insertStatementMethod2 going to be rolled back? In any case?
If I want them to execute anyway, would I need to check whether insertStatementMethod3 will fail before starting the transaction, and build my transaction code based on that?
Update
The code looks similar to this:
using(TransactionScope tran = new TransactionScope()) {
// <standard code>
yourExtraCode();
// <standard code>
tran.Complete();
}
where I get to write the yourExtraCode() method:
public void yourExtraCode() {
insertStatementMethod1();
insertStatementMethod2();
// this call might fail
insertStatementMethod3();
}
I can only edit the yourExtraCode() method, so I cannot choose whether or not to be in the transaction scope. One simple possible solution would be this:
public void yourExtraCode() {
insertStatementMethod1();
insertStatementMethod2();
// this call might fail
if (findOutIfIcanInsert()) { // <-- this would come by executing sql query
try {
insertStatementMethod3();
} catch (Exception e) {
// nothing to do
}
}
}
But that would require looking things up in the database, which would affect performance.
Is there a better way, or do I need to find out before I call the method?
I tried it out and, of course, the transaction was rolled back as expected.
If you don't want your first two methods to be transacted, just move them out from the ambient transaction's scope.
If you don't have control over the code which starts the ambient transaction, you can exclude specific calls from it with a suppressing scope: using (var scope = new TransactionScope(TransactionScopeOption.Suppress)). Code inside that scope runs outside the ambient transaction.
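As a sketch of that second option applied to the question's code (the method names are from the question; the Suppress scope is the only addition), yourExtraCode() could shield the first two inserts like this:
public void yourExtraCode()
{
    // These two inserts run outside the ambient transaction, so they are not
    // rolled back if the surrounding TransactionScope never completes.
    using (var suppressed = new TransactionScope(TransactionScopeOption.Suppress))
    {
        insertStatementMethod1();
        insertStatementMethod2();
        suppressed.Complete();
    }

    // Still runs inside the caller's ambient transaction; if it throws,
    // only the work enlisted in that transaction is rolled back.
    insertStatementMethod3();
}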
I have multiple methods inside a Parallel.Invoke() that need to run inside a transaction. These methods all invoke instances of SqlBulkCopy. The use case is "all-or-none", so if one method fails nothing gets committed. I am getting a TransactionAbortedException ({"Transaction Timeout"}) when I call the Complete() method on the parent transaction.
This is the parent transaction:
using (var ts = new TransactionScope())
{
var saveClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
var saveErrorsClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
var saveADClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
var saveEnrollmentsClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
Parallel.Invoke(_options, () =>
{
Save(data, saveClone);
},
() =>
{
SaveErrors(saveErrorsClone);
},
() =>
{
SaveEnrollments(data, saveEnrollmentsClone);
});
ts.Complete();
}//***** GET THE EXCEPTION HERE *****
Here's a dependent transaction that makes use of SqlBulkCopy (they're all the same structure). I'm passing in the parent and assigning it to the child's TransactionScope.
private void Save(IDictionary<string, string> data, Transaction transaction)
{
    var dTs = (DependentTransaction)transaction;
    if (transaction.TransactionInformation.Status != TransactionStatus.Aborted)
    {
        using (var ts = new TransactionScope(dTs))
        {
            _walmartData.Save(data);
            Debug.WriteLine("Completed Processing XML - {0}", _stopWatch.Elapsed);
            ts.Complete();
        }
    }
    else
    {
        Debug.WriteLine("Save Not Executed - Transaction Aborted - {0}", _stopWatch.Elapsed);
    }
    dTs.Complete(); // complete the dependent clone exactly once, on both paths
}
EDIT (added my SqlBulkCopy method...notice null for the transaction param)
private void SqlBulkCopy(DataTable dt, SqlBulkCopyColumnMappingCollection mappings)
{
try
{
using (var sbc = new SqlBulkCopy(_conn, SqlBulkCopyOptions.TableLock, null))
{
sbc.BatchSize = 100;
sbc.BulkCopyTimeout = 0;
sbc.DestinationTableName = dt.TableName;
foreach (SqlBulkCopyColumnMapping mapping in mappings)
{
sbc.ColumnMappings.Add(mapping);
}
sbc.WriteToServer(dt);
}
}
catch (Exception)
{
throw;
}
}
Besides fixing the error, I'm open to alternatives. Thanks.
You're creating a form of deadlock with your choice of DependentCloneOption.BlockCommitUntilComplete.
Parallel.Invoke blocks the calling thread until all of its processing is complete. The jobs run by Parallel.Invoke are all blocking while waiting for the parent transaction to complete (due to the DependentCloneOption). So the two are waiting on each other: a deadlock. The parent transaction eventually times out and releases the dependent transactions from blocking, which unblocks your calling thread.
Can you use DependentCloneOption.RollbackIfNotComplete?
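If that option fits, the only change in the parent would be how the clones are created; a sketch based on the code in the question:
// With RollbackIfNotComplete the parent's Complete() no longer blocks waiting for the clones;
// instead, the transaction aborts if a clone has not completed by the time the commit happens.
var saveClone = Transaction.Current.DependentClone(DependentCloneOption.RollbackIfNotComplete);
var saveErrorsClone = Transaction.Current.DependentClone(DependentCloneOption.RollbackIfNotComplete);
var saveEnrollmentsClone = Transaction.Current.DependentClone(DependentCloneOption.RollbackIfNotComplete);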
http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.complete.aspx says that TransactionScope.Complete only commits the transaction it contains if it was the one that created it. Since you are creating the scope from an existing transaction I believe you will need to commit the transaction before calling complete on the scope.
From MSDN:
"The actual work of commit between the resource managers happens at the End Using statement if the TransactionScope object created the transaction. If it did not create the transaction, the commit occurs whenever Commit is called by the owner of the CommittableTransaction object. At that point the Transaction Manager calls the resource managers and informs them to either commit or rollback, based on whether this method was called on the TransactionScope object."
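A sketch of what that looks like when the transaction is created explicitly (the variable names are illustrative, not from the question):
// The scope is built around an existing transaction, so disposing the scope does not commit it;
// the owner of the CommittableTransaction has to call Commit() itself.
var transaction = new CommittableTransaction();
using (var scope = new TransactionScope(transaction))
{
    // do the transactional work here
    scope.Complete();
}
transaction.Commit();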
After a lot of pain, research, and lack of a valid answer, I've got to believe that it's not possible with the stack that I described in my question. The pain-point, I believe, is between TransactionScope and SqlBulkCopy. I put this answer here for the benefit of future viewers. If someone can prove that it can be done, I'll gladly remove this as the answer.
I believe that how you create your _conn instance matters a lot; if you create it and open it within your TransactionScope instance, any SqlBulkCopy-related issues should be solved.
Have a look at Can I use SqlBulkCopy inside Transaction and Is it possible to use System.Transactions.TransactionScope with SqlBulkCopy? and see if it helps you.
void MyMainMethod()
{
    using (var ts = new TransactionScope())
    {
        Parallel.Invoke(() => DoStuff()); // or whatever invocation mechanism you use
        ts.Complete(); // finish the transaction, i.e. commit
    }
}

void DoStuff()
{
    using (var sqlCon = new SqlConnection(conStr))
    {
        sqlCon.Open(); // ensure it is opened here so SqlBulkCopy does not open it in another transaction scope
        using (var bulk = new SqlBulkCopy(sqlCon))
        {
            // do your stuff
            bulk.WriteToServer...
        }
    }
}
In short:
Create transaction scope
Create sql-connection and open it under the transaction scope
Create and use a SqlBulkCopy instance with the connection created above
Call Complete() on the transaction scope
Dispose of everything :-)
I'm searching for a design pattern that lets me implement some prolog code and then some epilog code.
Let me explain:
I have a function (actually a lot of them) that almost all do the same thing.
This is pseudo-code, but the real code is written in C# 4.5:
public IDatabaseError GetUserByName(string Name)
{
try
{
//Initialize session to database
}
catch (Exception)
{
// return error with description for this step
}
try
{
// Try to create 'transaction' object
}
catch(Exception)
{
// return error with description about this step
}
try
{
// Execute call to database with session and transaction object
//
// Actually in all function only this section of the code is different
//
}
catch(Exception)
{
// Transaction object rollback
// Return error with description for this step
}
finally
{
// Close session to database
}
return everything-is-ok
}
So, as you can see, the 'prolog' (create the session, transaction, and other helper objects) and the 'epilog' (close the session, roll back the transaction, clean up memory, etc.) are the same for all functions.
Some restrictions:
I want to keep the session and transaction object creation/destruction inside the function and not in the constructor
The custom code (running in the middle) must be wrapped in try/catch and return a different error for each situation
I'm open to suggestions using Func<>, Action<>, and preferably Task<>
Any ideas for a design pattern or code refactoring?
This can be achieved by using IDisposable objects as for example:
using(var uow = new UnitOfWork() )
using(var t = new TransactionScope() )
{
//query the database and throws exceptions
// in case of errors
}
Please note that TransactionScope is an out-of-the-box class in System.Transactions that works (not only) with DB connections.
In the UnitOfWork constructor do the "prologue" code (i.e. open the connection...); in Dispose do the epilogue part. By throwing an exception when an error occurs, you make sure the epilogue part is called anyway.
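A minimal sketch of such a UnitOfWork, assuming the class itself is something you write (only the prologue/epilogue placement matters here):
public sealed class UnitOfWork : IDisposable
{
    public UnitOfWork()
    {
        // Prologue: open the session/connection here and throw if it fails.
    }

    public void Dispose()
    {
        // Epilogue: close the session and release resources.
        // This runs even when the using block is left because of an exception.
    }
}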
It sounds like you're looking for the Template Method Pattern.
The template method pattern will allow you to reduce the amount of duplicated code in similar methods by extracting out only the parts of the method which are different.
For this particular example, you could write a method that does all the grunt work, and then invokes a callback to do the interesting work...
// THIS PART ONLY WRITTEN ONCE
public class Database
{
// This is the template method - it only needs to be written once, so the prolog and epilog only exist in this method...
public static IDatabaseError ExecuteQuery(Action<ISession> queryCallback)
{
try
{
//Initialize session to database
}
catch (Exception)
{
// return error with description for this step
}
try
{
// Try to create 'transaction' object
}
catch(Exception)
{
// return error with description about this step
}
try
{
// Execute call to database with session and transaction object
//
// Actually in all function only this section of the code is different
//
var session = the session which was set up at the start of this method...
queryCallback(session);
}
catch(Exception)
{
// Transaction object rollback
// Return error with description for this step
}
finally
{
// Close session to database
}
return everything-is-ok
}
}
This is the usage:
// THIS PART WRITTEN MANY TIMES
IDatabaseError error = Database.ExecuteQuery(session =>
{
// do your unique thing with the database here - no need to write the prolog / epilog...
// you can use the session variable - it was set up by the template method...
// you can throw an exception, it will be converted to IDatabaseError by the template method...
});
if (error != null)
// something bad happened!
I hope I have explained better this time :)
I have a user repository, which does all the user data access. I also have a unit of work class that manages the connection and transaction for my repositories. How do I effectively rollback a transaction on my unit of work class, if an error happens within my repository?
Here is the Create method on my UserRepository. I'm using Dapper for data access.
try
{
this.Connection.Execute("User_Create", parameters, this.Transaction,
commandType: CommandType.StoredProcedure);
}
catch (Exception)
{
//Need to tell my unit of work to rollback the transaction.
}
I pass both the connection and transaction that were created in my unit of work constructor to my repositories. Below is a property on my unit of work class.
public UserRepository UserRepository
{
get
{
if (this._userRepository == null)
this._userRepository =
new UserRepository(this._connection, this._transaction);
return this._userRepository;
}
}
I'm hoping to figure out the best approach.
* Update *
After doing more research into the unit of work pattern I think I am using it completely wrong in my example.
Dapper supports TransactionScope, which provides a Complete() method to commit the transaction; if you don't call Complete(), the transaction is aborted.
using (TransactionScope scope = new TransactionScope())
{
//open connection, do your thing
scope.Complete();
}
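Applied to the User_Create call from the question, that could look roughly like this (the connection string and parameters are placeholders; assumes using Dapper and System.Transactions):
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open(); // opening inside the scope enlists the connection in the ambient transaction

    connection.Execute("User_Create", parameters,
        commandType: CommandType.StoredProcedure);

    scope.Complete(); // skipped if Execute throws, so the work is rolled back
}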
I see there are two main options for managing transactions with LLBLGen.
Method 1:
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
adapter.StartTransaction(IsolationLevel.ReadCommitted, "TR");
try
{
// ...
adapter.Commit();
}
catch
{
adapter.Rollback();
throw;
}
}
Method 2:
using(TransactionScope scope = new TransactionScope())
{
// ...
scope.Complete();
}
What is your preferred method and why? (I'm using the adapter model, LLBLGen 2.6, .NET 3.5.)
I would lean towards using TransactionScope for managing transactions, as that is what it was designed for, whereas the DataAccessAdapter, while it has the ability to create transactions, is designed primarily for data access.
To put it a little more clearly: you could use TransactionScope to manage a single transaction across multiple DataAccessAdapters, whilst a single DataAccessAdapter appears to have a specific scope.
For example:
using (TransactionScope ts = new TransactionScope())
{
    using (DataAccessAdapter d1 = new DataAccessAdapter())
    {
        // do some data access stuff
    }
    using (DataAccessAdapter d2 = new DataAccessAdapter())
    {
        // do some other data access stuff
    }
    ts.Complete();
}
Another side note is that TransactionScope is thread-safe, whereas DataAccessAdapters are not.