I’m running into problems using MSDTC and Oracle. It’s a .NET application and I’m using the TransactionScope class to control the transactions.
The problem is that sometimes, if the transaction is rolled back (scope.Dispose is called without scope.Complete having been called), it stays in the “Aborting” state for a long time without releasing the locked records. Even though the transaction stays in the “Aborting” state, the call to Dispose that aborts it returns immediately, so the thread doesn’t get stuck.
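The pattern is essentially this (a minimal sketch; UpdateSomeOracleRecords stands in for the real data access code):

using (var scope = new TransactionScope())
{
    UpdateSomeOracleRecords();   // placeholder for the real work; takes row locks on the server

    // an error occurs, so scope.Complete() is never called;
    // Dispose() at the end of the using block returns immediately,
    // yet the transaction lingers in "Aborting" and keeps its locks
}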
Does anyone know what could cause the transaction to behave like this and keep the locks after abort has been called?
Thanks
There are known issues around the use of distributed transactions when using the Microsoft Data Provider for Oracle.
If you are using it, try switching to the ODP.NET provider, which should fix your transaction problems.
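For example, with ODP.NET a connection opened inside the scope enlists in the ambient transaction automatically. A minimal sketch, assuming the unmanaged provider (Oracle.DataAccess.Client) and placeholder names:

using (var scope = new TransactionScope())
using (var conn = new OracleConnection(connectionString))  // Oracle.DataAccess.Client
{
    conn.Open();  // enlists in the ambient transaction

    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "UPDATE orders SET status = 'shipped' WHERE id = :id";
        cmd.Parameters.Add(new OracleParameter("id", 42));
        cmd.ExecuteNonQuery();
    }

    scope.Complete();  // omit this and Dispose rolls the transaction back
}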
I'm using Dapper, but this applies the same to ADO.NET code.
I have an operation on a web app that changes a lot of state in the database. To ensure an all-or-nothing result, I use a transaction to manage this. To do this, all my Repository classes share a connection (which is instantiated per request). On my connection I can call Connection.BeginTransaction().
However, this operation can sometimes take a while (say 10 seconds), and it's locking some frequently-read-from tables while it does its thing. I want to allow other repos on other threads to continue without locking while this is happening.
It looks like I need to do 2 things to make this happen:
1) Set the IsolationLevel to something like ReadUncommitted:
_transaction = Connection.BeginTransaction(IsolationLevel.ReadUncommitted);
2) For all other connections that don't need a transaction, I still need to enroll those connections in a transaction so that I can again set ReadUncommitted (sketched below). If I don't do this then they'll still block while they wait for the long-running operation to complete.
So does this mean I need ALL my connections to start a transaction? This sounds expensive and sub-performant. Are there other solutions I'm missing here?
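For reference, the enrollment from step 2 would mean something like this on every reading repository (a sketch; Order and customerId are placeholders):

using (var tx = _connection.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    var orders = _connection.Query<Order>(
        "SELECT * FROM Orders WHERE CustomerId = @id",
        new { id = customerId },
        transaction: tx).ToList();
    tx.Commit();
}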
Thanks
Be aware that there is a trade-off between using locks and not using them: it's performance versus concurrency control. So I don't think you should use ReadUncommitted all the time.
If you use ReadUncommitted on all the other transactions that must not be blocked by this long-running transaction, they will also, unintentionally, not be blocked by any other transaction, and may read uncommitted (dirty) data.
Generally, this isolation level is used when performance is the first priority and data accuracy is not required.
I want to allow other repos on other threads to continue without locking while this is happening.
I think you can try IsolationLevel.Snapshot on only the transaction that does the long locking work: https://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.110).aspx
Extracted from the link:
The term "snapshot" reflects the fact that all queries in the
transaction see the same version, or snapshot, of the database, based
on the state of the database at the moment in time when the
transaction begins. No locks are acquired on the underlying data rows
or data pages in a snapshot transaction, which permits other
transactions to execute without being blocked by a prior uncompleted
transaction. Transactions that modify data do not block transactions
that read data, and transactions that read data do not block
transactions that write data, as they normally would under the default
READ COMMITTED isolation level in SQL Server. This non-blocking
behavior also significantly reduces the likelihood of deadlocks for
complex transactions.
Be aware that an enormous amount of data can be generated in tempdb for the version store if there are a lot of modifications.
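Applied to the code in the question, this is just a different isolation level on the one long-running transaction (note the one-time database setting in the comment; MyDb is a placeholder):

// snapshot isolation must first be enabled once per database:
//   ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON
_transaction = Connection.BeginTransaction(IsolationLevel.Snapshot);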
We are building a WinForms desktop application which talks to a SQL Server through NHibernate. After extensive research we settled on a session-per-form strategy, using Ninject to inject a new ISession into each Form (or the backing controller, to be precise). So far it is working decently.
Unfortunately the main Form holds a lot of data (mostly read-only) which gets stale after some time. To prevent this we implemented a background service (really just a separate class) which polls the DB for changes and raises an event which lets the main form selectively update the changed rows.
This background service also gets a separate session to minimize interference with the other forms. Our understanding was that it is possible to open one transaction per session in parallel, as long as they are not nested.
Sadly this doesn't seem to be the case. Either we get an ObjectDisposedException in one of the forms or in the service (because the service session used an existing transaction from one of the forms and committed it, which makes the commit in the form fail, or the other way round), or we get an InvalidOperationException stating that "Parallel transactions are not supported by SQL Server".
Is there really no way to open more than one transaction in parallel (across separate sessions)?
And alternatively, is there a better way to update stale data in a long-running form?
Thanks in advance!
I'm pretty sure you have messed something up, and are sharing either session or connection instances in ways you did not intend.
It can depend a bit on which sort of transactions you use:
If you use only NHibernate transactions (session.BeginTransaction()), each session acts independently. Unless you do something special to supply your own underlying database connections (and made an error there), each session will have its own connection and transaction.
If you use TransactionScope from System.Transactions in addition to the NHibernate transactions, you need to be careful about thread handling and the TransactionScopeOption. Otherwise different parts of your code may unexpectedly share the same transaction if a single thread runs through both parts and you haven't used TransactionScopeOption.RequiresNew.
Perhaps you are not properly disposing your transactions (and sessions)?
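For comparison, independent parallel transactions with one session per form/service should look roughly like this (a sketch; Customer is a placeholder entity):

// each form and the background service gets its own ISession, and
// therefore its own connection and fully independent transaction
using (var session = _sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var rows = session.CreateCriteria<Customer>().List<Customer>();
    tx.Commit();
}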
I am firing off a very long-running query via BeginExecuteNonQuery.
I want to close my connection and allow this query to continue its work on the server (I will check on the results later via a new connection/thread).
However, it seems that whenever my connection is garbage collected (even if I didn't specifically close it), the server-side process is terminated.
I am using connection pooling and suspect this could be the problem - can anyone suggest a solution for me?
Thanks!
Edit:
Someone mentioned this in a comment (and then deleted the comment?!)
http://rusanu.com/2009/08/05/asynchronous-procedure-execution/
It is quite useful, but it requires that I create a service, a queue and a couple of stored procs, which I don't really want to do.
Also, I cannot guarantee that the SQL Server Agent service will be active, so unfortunately I cannot create jobs to run the SQL.
Close the connection inside the callback that you register with BeginExecuteNonQuery. As a side effect, this keeps the SqlConnection object alive until the query finishes. You need to ensure disposal anyway, even for a long-running statement.
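Something like this (a sketch; the statement is a placeholder). Passing the command as the state object keeps it, and its connection, reachable until the callback runs:

// on older .NET Framework versions the connection string may need
// "Asynchronous Processing=true" for the Begin/End methods
var conn = new SqlConnection(connectionString);
conn.Open();
var cmd = new SqlCommand("EXEC dbo.LongRunningProc", conn);

cmd.BeginExecuteNonQuery(ar =>
{
    var command = (SqlCommand)ar.AsyncState;
    try
    {
        command.EndExecuteNonQuery(ar);  // completes (or throws) when the server finishes
    }
    finally
    {
        command.Connection.Dispose();    // dispose only after the server-side work is done
        command.Dispose();
    }
}, cmd);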
When trying to debug something I found code that was effectively doing the following:
Creating a TransactionScope
Creating a Transaction (in this case an NHibernate tx, but that's not really important)
Creating a second transaction (in this case a standard ADO.Net Tx)
Committing the second transaction
Calling Complete() on the TransactionScope
Disposing the TransactionScope.
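In code, the sequence looked roughly like this (heavily simplified; session and connection stand in for the real objects):

using (var scope = new TransactionScope())
{
    var nhTx = session.BeginTransaction();       // first transaction (NHibernate), never committed

    var adoTx = connection.BeginTransaction();   // second transaction (ADO.NET)
    // ... work ...
    adoTx.Commit();                              // only the second transaction is committed

    scope.Complete();
}                                                // scope.Dispose() runs here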
Now, creating a transaction and not committing it is probably a bad idea anyway (and fixing that was the bug fix).
However, when testing this I tried various combinations of the above (committing all transactions, some transactions, no transactions (i.e. only the TScope), committing the first but not the second, adding other transactions, etc.) and in all testing I found the following to be true:
Only when I failed to commit the first transaction AND the transaction scope became distributed would the Dispose of the TScope fail with:
System.InvalidOperationException : The operation is not valid for the current state of the enlistment.
I am now curious and would like to know why this is the case?
I suspect the problem you see is covered by one of these: https://nhibernate.jira.com/issues/?jql=project%20%3D%2010000%20AND%20labels%20%3D%20TransactionScope
I'm not entirely sure what happens but I've seen similar behaviour, e.g. if NH enlists in the ambient transaction, and the transaction later becomes distributed, calling TransactionScope.Complete() might hang for 20 seconds and then fail.
NH will try to enlist in a TransactionScope even if you don't use an NH transaction. In that case, NH will flush changes during the ambient transaction's Prepare() phase. It does this on the db connection, but that connection has also enlisted in the transaction and will get its own Prepare() call. Unfortunately I haven't been able to figure out the exact problem, but I suspect that in some circumstances the db connection's Prepare() is called before NHibernate's Prepare(). The latter then tries to continue using the db connection, and it appears this causes some sort of deadlock.
Using an NH transaction and committing it before completing the transaction scope makes NH flush its changes before the underlying DB connection enters the prepare phase.
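In other words, the safe pattern looks something like this (a sketch):

using (var scope = new TransactionScope())
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // ... make changes through the session ...
    tx.Commit();       // NH flushes here, while the connection is still freely usable
    scope.Complete();  // the ambient transaction's prepare/commit runs later, on Dispose
}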
Greetings
I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:
using (var tran = MyDataLayer.Transaction())
{
    MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
    MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
    tran.Commit();
}
I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and the TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" but no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql".

The only way that records should exist in the tables I'm looking at is from SprocTheFirst, and it's only ever called in this one function, so if it's called and succeeds then I would expect CallSomethingThatEventuallyDoesLinqToSql to get called and succeed too, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time that the records from SprocTheFirst were created.
So, is it possible that a transaction, or more properly a declarative TransactionScope, with the Snapshot isolation level can somehow fail and only partially commit?
We have spotted the same issue. I have recreated it here - https://github.com/DavidBetteridge/MSMQStressTest
For us, the issue appears when reading from the queue rather than writing to it. Our solution was to change the isolation level of the first read in the subscriber to Serializable.
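With System.Transactions, forcing that first read to Serializable could look like this (a sketch; queue is a placeholder for the subscriber's System.Messaging.MessageQueue):

var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    var message = queue.Receive(MessageQueueTransactionType.Automatic);  // enlists in the ambient transaction
    // ... process the message ...
    scope.Complete();
}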
No, but the Snapshot isolation level isn't the same as Serializable.
Snapshotted row versions are stored in tempdb until the transaction commits, so some other transaction can read the old data just fine.
At least, that's how I understood your problem. If not, please provide more info, such as a graph of the timeline or something similar.
Can you verify that CallSomethingThatEventuallyDoesLinqToSql is using the same connection as the first call? Does the second call read data that the first call filed into the db, and if it were unable to "see" that data, would that cause the second to skip a few steps and not do its job?
Just because you have it wrapped in a .NET transaction doesn't mean the data as seen in the db is the same between connections. You could, for instance, have connections to two different databases and want to roll back both if one failed, or file data to a DB and post a message to MSMQ, where a failed MSMQ operation would roll back the DB operation too. The .NET transaction takes care of this multi-technology scenario for you.
I do remember a problem in early versions of ADO.NET (maybe 3.0) where the connection-pooling code would allocate a new db connection rather than use the current one when a .NET-level TransactionScope was used. I believe it was fully implemented by 3.5 (I may have my versions wrong; it might be 3.5 and 3.5.1). It could also be caused by MyDataLayer and how it allocates connections.
Use SQL Profiler to trace these operations and make sure the work is being done on the same spid.
It sounds like your connection may not be enlisted in the transaction. When do you create your connection object? If it is created before the TransactionScope, it will not be enlisted in the transaction.
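That is, the connection has to be opened inside the scope for auto-enlistment to happen (a sketch):

using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();  // opened inside the scope, so it auto-enlists

    // both SprocTheFirst and the LINQ to SQL call must run on a
    // connection enlisted in this same ambient transaction
    scope.Complete();
}

// a connection opened before the scope exists does not enlist automatically;
// it needs an explicit conn.EnlistTransaction(Transaction.Current)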