I'm trying to work with some ambient transaction scopes (thanks, entity-framework), which I haven't really done before, and I'm seeing some ... odd behavior that I'm trying to understand.
I'm trying to enlist in the current transaction scope and do some work after it completes successfully. My enlistment participant implements IDisposable due to some resources it holds. I've got a simple example that exhibits the strange behavior.
For this class,
class WtfTransactionScope : IDisposable, IEnlistmentNotification
{
    public WtfTransactionScope()
    {
        if (Transaction.Current == null)
            return;
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    void IEnlistmentNotification.Commit(Enlistment enlistment)
    {
        enlistment.Done();
        Console.WriteLine("Committed");
    }

    void IEnlistmentNotification.InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
        Console.WriteLine("InDoubt");
    }

    void IEnlistmentNotification.Prepare(
        PreparingEnlistment preparingEnlistment)
    {
        Console.WriteLine("Prepare called");
        preparingEnlistment.Prepared();
        Console.WriteLine("Prepare completed");
    }

    void IEnlistmentNotification.Rollback(Enlistment enlistment)
    {
        enlistment.Done();
        Console.WriteLine("Rolled back");
    }

    public void Dispose()
    {
        Console.WriteLine("Disposed");
    }
}
when used as illustrated here
using(var scope = new TransactionScope())
using(new WtfTransactionScope())
{
    scope.Complete();
}
the console output demonstrates the wtf-ness:
Disposed
Prepare called
Committed
Prepare completed
wut.
I'm getting disposed before the transaction completes. This... kind of negates the point of enlisting in the transaction scope. I was hoping to be informed once the transaction completes (successfully or not) so I could do some work. I was, unfortunately, assuming this would happen after scope.Complete() and before I get disposed as we leave the using scope. This is apparently not the case.
Of course, I could hack around it. But I've got other issues which essentially prevent me from doing that. In that eventuality I'll have to scrap this and do something else.
Am I doing something wrong here? Or is this expected behavior? Can something be done differently to prevent this from happening?
This is self-inflicted pain. You are violating a very basic rule for IDisposable: an object should only ever be disposed when it is no longer in use anywhere else. It is still in use when your using statement calls Dispose(); you handed a reference to your object to the transaction in the WtfTransactionScope constructor. You cannot dispose it until the transaction is done with it, which necessarily means you have to dispose it after the using statement for the TransactionScope completes and the transaction has been committed or rolled back.
I'll let you fret about making it pretty, but an obvious way is:
using(var wtf = new WtfTransactionScope())
using(var scope = new TransactionScope())
{
    wtf.Initialize();
    scope.Complete();
}
The "Prepare completed" message is merely an unhelpful debug statement. Just delete it.
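To make the suggested restructuring concrete, here is a sketch of the wrapper with enlistment moved out of the constructor into the Initialize method used above. Only the shape matters; the notification bodies here are assumptions, kept minimal:

```csharp
using System;
using System.Transactions;

// Sketch only: enlistment is deferred to Initialize(), so the wrapper
// can be constructed *before* the TransactionScope and disposed *after*
// it, which puts Dispose() after the commit notification.
class WtfTransactionScope : IDisposable, IEnlistmentNotification
{
    public void Initialize()
    {
        if (Transaction.Current != null)
            Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    void IEnlistmentNotification.Prepare(PreparingEnlistment preparingEnlistment)
        => preparingEnlistment.Prepared();

    void IEnlistmentNotification.Commit(Enlistment enlistment)
    {
        Console.WriteLine("Committed");
        enlistment.Done();
    }

    void IEnlistmentNotification.Rollback(Enlistment enlistment)
    {
        Console.WriteLine("Rolled back");
        enlistment.Done();
    }

    void IEnlistmentNotification.InDoubt(Enlistment enlistment)
        => enlistment.Done();

    public void Dispose() => Console.WriteLine("Disposed");
}
```

With this ordering, the commit notification fires while the TransactionScope's using block is being torn down, before the wrapper's own Dispose() runs.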
This issue has been perplexing me for quite a while. I'm trying to dispose of a connection to a database, but it keeps leaking no matter how I try to dispose of it. Worse yet, it's supposed to dispose automatically.
Long story short, I have a HostedService that starts with the following code:
public async Task StartAsync(CancellationToken cancellationToken)
{
    using (var scope = _serviceProvider.CreateScope())
    {
        _logger.LogDebug("Initializing operators");
        using (var operatorRepo = scope.ServiceProvider.GetService<IOperatorRepository>())
        {
            var availableOperators = operatorRepo.GetAllOperatorsWithSkills();
            _logger.LogDebug($"Initialized {availableOperators.Count()} operators");
            foreach (OperatorRow opRow in availableOperators)
            {
                OperatorMeta opMeta = new(opRow.Idr, opRow.Name, opRow.Number, opRow.OperatorSkills.Select(skill => (skill.SkillId, skill.Experience)));
                if (!TryAddOperator(opMeta))
                    throw new ArgumentException($"Operator with id {opRow.Idr} has already been added!");
            }
            if (Convert.ToBoolean(_configuration.GetSection("isOperatorTesting")?.Value))
            {
                _globalOperatorPool.TryAdd(
                    -1,
                    new CampaignOperator(
                        new OperatorMeta(-1, "Test operator", _configuration.GetSection("testOperatorNumber").Value, Enumerable.Empty<(int, int)>()),
                        OperatorState.Unassigned)
                );
            }
        }
    }
}
I have tried to wrap the insides of the using in a try/finally block just in case, but I honestly don't see how that would be the issue because the using isn't inside one. Still, the connection doesn't close.
The class behind IOperatorRepository inherits, through a chain of two classes, from LinqToDB.DataConnection. It's the only IDisposable in the chain.
Honestly, I'm stumped. Any help would be appreciated.
Edit: I feel like I should specify that the connection class does indeed throw ObjectDisposedException afterwards, but the connection stays open in the database, and more are made constantly as if they're not being reused.
I've got a problem with making calls to a third-party C++ dll which I've wrapped in a class using DllImport to access its functions.
The DLL demands that a session be opened before use, which returns an integer handle used to refer to that session when performing operations. When finished, one must close the session using the same handle. So I did something like this:
public void DoWork(string input)
{
    int apiHandle = DllWrapper.StartSession();
    try
    {
        // do work using the apiHandle
    }
    catch (ApplicationException ex)
    {
        // log the error
    }
    finally
    {
        DllWrapper.CloseSession(apiHandle);
    }
}
The problem I have is that CloseSession() sometimes causes the Dll in question to throw an error when running threaded:
System.AggregateException: One or more errors occurred. --->
System.AccessViolationException: Attempted to read or write protected
memory. This is often an indication that other memory is corrupt.
I'm not sure there's much I can do to stop this error, since it seems to arise from using the DLL in a threaded manner, even though it is supposed to be thread safe. And since my CloseSession() function does nothing except call that DLL's close function, there's not much wiggle room for me to "fix" anything.
The end result, however, is that the session doesn't close properly. So when the process tries again, which it's supposed to do, it encounters an open session and just keeps throwing new errors. That session absolutely has to be closed.
I'm at a loss as to how to design error handling that's more robust and will ensure the session always closes.
I would change the wrapper to include disposal of the external resource and to also wrap the handle. I.e. instead of representing a session by a handle, you would represent it by a wrapper object.
Additionally, wrapping the calls to the DLL in lock statements (as @Serge suggests) could prevent the multithreading issues completely. Note that the lock object is static, so that all DllWrappers use the same lock object.
public class DllWrapper : IDisposable
{
    private static object _lockObject = new object();
    private int _apiHandle;
    private bool _isOpen;

    public void StartSession()
    {
        lock (_lockObject)
        {
            _apiHandle = ...; // TODO: open the session
        }
        _isOpen = true;
    }

    public void CloseSession()
    {
        const int MaxTries = 10;
        for (int i = 0; _isOpen && i < MaxTries; i++)
        {
            try
            {
                lock (_lockObject)
                {
                    // TODO: close the session
                }
                _isOpen = false;
            }
            catch
            {
            }
        }
    }

    public void Dispose()
    {
        CloseSession();
    }
}
Note that the methods are instance methods, now.
Now you can ensure the closing of the session with a using statement:
using (var session = new DllWrapper())
{
    try
    {
        session.StartSession();
        // TODO: work with the session
    }
    catch (ApplicationException ex)
    {
        // TODO: log the error
        // This is for exceptions not related to closing the session. If such exceptions
        // cannot occur, you can drop the try-catch completely.
    }
} // Closes the session automatically by calling `Dispose()`.
You can improve naming by calling this class Session and the methods Open and Close. The user of this class does not need to know that it is a wrapper. This is just an implementation detail. Also, the naming of the methods is now symmetrical and there is no need to repeat the name Session.
By encapsulating all the session related stuff, including error handling, recovery from error situations and disposal of resources, you can considerably diminish the mess in your code. The Session class is now a high-level abstraction. The old DllWrapper was somewhere at mid distance between low-level and high-level.
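Under that renaming, the skeleton might look like the sketch below. The native calls behind Open and Close are stubbed out here, since the real ones live in the third-party DLL; the IsOpen property is an addition for illustration:

```csharp
using System;

// Sketch of the renamed wrapper. The bodies of Open/Close stand in for
// the real DllWrapper.StartSession/CloseSession P/Invoke calls.
public sealed class Session : IDisposable
{
    private static readonly object _lockObject = new object();
    private int _apiHandle;

    // Exposed for illustration; lets callers (and tests) observe state.
    public bool IsOpen { get; private set; }

    public void Open()
    {
        lock (_lockObject)
        {
            _apiHandle = 1; // stand-in for DllWrapper.StartSession()
        }
        IsOpen = true;
    }

    public void Close()
    {
        if (!IsOpen) return;
        lock (_lockObject)
        {
            // stand-in for DllWrapper.CloseSession(_apiHandle)
        }
        IsOpen = false;
    }

    // Dispose simply delegates to Close, so a using statement is enough.
    public void Dispose() => Close();
}
```

Making Close idempotent (the IsOpen guard) means Dispose can safely run even if the caller already closed the session explicitly.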
Considering this piece of code:
using (TransactionScope tran = new TransactionScope())
{
    insertStatementMethod1();
    insertStatementMethod2();
    // this might fail
    try
    {
        insertStatementMethod3();
    }
    catch (Exception e)
    {
        // nothing to do
    }
    tran.Complete();
}
Is anything done in insertStatementMethod1 and insertStatementMethod2 going to be rolled back? In any case?
If I want them to execute anyway, would I need to check whether insertStatementMethod3 will fail before the transaction, and build my transaction code based on that?
Update
The code looks similar to this
using (TransactionScope tran = new TransactionScope())
{
    // <standard code>
    yourExtraCode();
    // <standard code>
    tran.Complete();
}
where I get to write the yourExtraCode() method
public void yourExtraCode()
{
    insertStatementMethod1();
    insertStatementMethod2();
    // this call might fail
    insertStatementMethod3();
}
I can only edit the yourExtraCode() method, so I cannot choose whether or not to be in the transaction scope. One simple possible solution would be this:
public void yourExtraCode()
{
    insertStatementMethod1();
    insertStatementMethod2();
    // this call might fail
    if (findOutIfIcanInsert()) // <-- this would come from executing a SQL query
    {
        try
        {
            insertStatementMethod3();
        }
        catch (Exception e)
        {
            // nothing to do
        }
    }
}
But that would require looking things up in the database, which would affect performance.
Is there a better way, or do I need to find out before calling the method?
I tried it out and, of course, the transaction was rolled back as expected.
If you don't want your first two methods to be transacted, just move them out from the ambient transaction's scope.
If you don't have control over the code which starts an ambient transaction, you can suppress it by creating a new ambient transaction: using (var scope = new TransactionScope(TransactionScopeOption.Suppress)).
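Applied to yourExtraCode, the suppression approach could look like the sketch below. The insertStatementMethod* bodies here are stubs standing in for the real inserts, just to show the control flow; whether the third insert really should run outside the ambient transaction is a business decision:

```csharp
using System;
using System.Collections.Generic;
using System.Transactions;

// Sketch: the insert methods are stubs that record their invocation;
// the real ones would perform SQL inserts.
class ExtraCode
{
    public List<string> Calls = new List<string>();

    void insertStatementMethod1() => Calls.Add("1");
    void insertStatementMethod2() => Calls.Add("2");
    void insertStatementMethod3() { Calls.Add("3"); throw new Exception("boom"); }

    public void yourExtraCode()
    {
        insertStatementMethod1();
        insertStatementMethod2();

        // Run the fragile insert outside the ambient transaction, so a
        // failure here can neither doom the surrounding transaction nor
        // be rolled back with it.
        using (var suppress = new TransactionScope(TransactionScopeOption.Suppress))
        {
            try
            {
                insertStatementMethod3();
            }
            catch (Exception)
            {
                // nothing to do
            }
            suppress.Complete();
        }
    }
}
```

Note the trade-off: because the third insert is non-transactional here, it will also not be undone if the outer transaction later rolls back.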
I have multiple methods inside a Parallel.Invoke() that need to run inside of a transaction. These methods all invoke instances of SqlBulkCopy. The use case is "all or none", so if one method fails nothing gets committed. I am getting a TransactionAbortedException ({"Transaction Timeout"}) when I call the Complete() method on the parent transaction.
This is the parent transaction:
using (var ts = new TransactionScope())
{
    var saveClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
    var saveErrorsClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
    var saveADClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);
    var saveEnrollmentsClone = Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);

    Parallel.Invoke(_options, () =>
    {
        Save(data, saveClone);
    },
    () =>
    {
        SaveErrors(saveErrorsClone);
    },
    () =>
    {
        SaveEnrollments(data, saveEnrollmentsClone);
    });

    ts.Complete();
} //***** GET THE EXCEPTION HERE *****
Here's a dependent transaction that makes use of SqlBulkCopy (they're all the same structure). I'm passing in the parent and assigning it to the child's TransactionScope:
private void Save(IDictionary<string, string> data, Transaction transaction)
{
    var dTs = (DependentTransaction)transaction;
    if (transaction.TransactionInformation.Status != TransactionStatus.Aborted)
    {
        using (var ts = new TransactionScope(dTs))
        {
            _walmartData.Save(data);
            Debug.WriteLine("Completed Processing XML - {0}", _stopWatch.Elapsed);
            ts.Complete();
        }
    }
    else
    {
        Debug.WriteLine("Save Not Executed - Transaction Aborted - {0}", _stopWatch.Elapsed);
    }
    dTs.Complete();
}
EDIT (added my SqlBulkCopy method...notice null for the transaction param)
private void SqlBulkCopy(DataTable dt, SqlBulkCopyColumnMappingCollection mappings)
{
    try
    {
        using (var sbc = new SqlBulkCopy(_conn, SqlBulkCopyOptions.TableLock, null))
        {
            sbc.BatchSize = 100;
            sbc.BulkCopyTimeout = 0;
            sbc.DestinationTableName = dt.TableName;
            foreach (SqlBulkCopyColumnMapping mapping in mappings)
            {
                sbc.ColumnMappings.Add(mapping);
            }
            sbc.WriteToServer(dt);
        }
    }
    catch (Exception)
    {
        throw;
    }
}
Besides fixing the error, I'm open to alternatives. Thanks.
You're creating a form of deadlock with your choice of DependentCloneOption.BlockCommitUntilComplete.
Parallel.Invoke blocks the calling thread until all of its processing is complete. The jobs run by Parallel.Invoke are all blocking while waiting for the parent transaction to complete (due to the DependentCloneOption). So the two are waiting on each other: deadlock. The parent transaction eventually times out and releases the dependent transactions from blocking, which unblocks your calling thread.
Can you use DependentCloneOption.RollbackIfNotComplete ?
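If that option fits your semantics, only the clone creation changes. A sketch, with the parallel worker elided; with RollbackIfNotComplete, attempting to commit before a clone has called Complete() aborts the transaction instead of blocking until timeout:

```csharp
using System;
using System.Transactions;

// Same shape as the question's parent transaction, but the dependent
// clone uses RollbackIfNotComplete instead of BlockCommitUntilComplete.
using (var ts = new TransactionScope())
{
    var saveClone = Transaction.Current.DependentClone(
        DependentCloneOption.RollbackIfNotComplete);

    // ... hand saveClone to the parallel worker as before ...
    saveClone.Complete(); // each worker must still complete its clone

    ts.Complete();
}
Console.WriteLine("committed");
```

The workers must still complete their clones before ts.Complete() runs; the difference is the failure mode: an incomplete clone now causes a prompt abort rather than a commit that hangs until the transaction timeout.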
http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.complete.aspx says that TransactionScope.Complete only commits the transaction it contains if the scope was the one that created it. Since you are creating the scope from an existing transaction, I believe you will need to commit the transaction before calling Complete on the scope.
From MSDN:

The actual work of commit between the resource managers happens at the End Using statement if the TransactionScope object created the transaction. If it did not create the transaction, the commit occurs whenever Commit is called by the owner of the CommittableTransaction object. At that point the Transaction Manager calls the resource managers and informs them to either commit or rollback, based on whether this method was called on the TransactionScope object.
After a lot of pain, research, and lack of a valid answer, I've got to believe that it's not possible with the stack that I described in my question. The pain-point, I believe, is between TransactionScope and SqlBulkCopy. I put this answer here for the benefit of future viewers. If someone can prove that it can be done, I'll gladly remove this as the answer.
I believe that how you create your _conn instance matters a lot; if you create it and open it within your TransactionScope instance, any SqlBulkCopy-related issues should be solved.
Have a look at "Can I use SqlBulkCopy inside Transaction" and "Is it possible to use System.Transactions.TransactionScope with SqlBulkCopy?" and see if they help you.
void MyMainMethod()
{
    using (var ts = new TransactionScope())
    {
        Parallell.InvokeOrWhatNotOrWhatEver(() => DoStuff());
        ts.Complete(); // finish the transaction, i.e. commit
    }
}

void DoStuff()
{
    using (var sqlCon = new SqlConnection(conStr))
    {
        sqlCon.Open(); // ensure it is opened under the ambient transaction, before SqlBulkCopy can open it in another transaction scope.
        using (var bulk = new SqlBulkCopy(sqlCon))
        {
            // Do your stuff
            bulk.WriteToServer(...);
        }
    }
}
In short:
Create transaction scope
Create sql-connection and open it under the transaction scope
Create and use a SqlBulkCopy instance with the above-created connection
Call transaction.Complete()
Dispose of everything :-)
We have a quite large WinForms desktop application. Our application runs into a deadlock every once in a while, and we are not sure how this happens.
We do know that this is caused by a locking operation. So we have quite a lot code parts like this:
lock (_someObj)
    DoThreadSaveOperation();
Our approach to detecting what caused the deadlock is to convert all those lock operations into something like this:
bool lockTaken = false;
var temp = _someObj;
try
{
    System.Threading.Monitor.TryEnter(temp, 1000, ref lockTaken);
    if (!lockTaken)
    {
        // log "can't get lock, maybe deadlock", print stacktrace
    }
    DoThreadSaveOperation();
}
finally
{
    if (lockTaken)
        System.Threading.Monitor.Exit(temp);
}
This "lock service" should be at a central position. The problem is that it then has to be called like this:
LockService.RunWithLock(object objToLock, Action methodToRun);
That would mean we would have to create a delegate for each statement that is executed under a lock.
Since this would be a lot of refactoring, I thought I'd give a try on stackoverflow if you guys have a better/good idea about this and also ask for your opinion.
Thanks for your help =)
Since the existing lock functionality closely models a using statement, I suggest that you wrap up your logic in a class that implements IDisposable.
The class's constructor would attempt to get the lock, and if it failed to get the lock you could either throw an exception or log it. The Dispose() method would release the lock.
You would use it in a using statement so it will be robust in the face of exceptions.
So something like this:
public sealed class Locker : IDisposable
{
    readonly object _lockObject;
    readonly bool _wasLockAcquired;

    public Locker(object lockObject, TimeSpan timeout)
    {
        _lockObject = lockObject;
        Monitor.TryEnter(_lockObject, timeout, ref _wasLockAcquired);
        // Throw if lock wasn't acquired?
    }

    public bool WasLockAcquired
    {
        get
        {
            return _wasLockAcquired;
        }
    }

    public void Dispose()
    {
        if (_wasLockAcquired)
            Monitor.Exit(_lockObject);
    }
}

Which you could use like this:

using (var locker = new Locker(someObj, TimeSpan.FromSeconds(1)))
{
    if (locker.WasLockAcquired)
    {
        // ...
    }
}
Which I think will help to minimise your code changes.