Transaction Scope - ORA-02089 - c#

I'm using TransactionScope in my project to control a transaction spanning several SQL statements (insert, update, delete).
One of my statements calls an existing stored procedure in the database, but inside this procedure there is a COMMIT statement. I can't change it because it's used in other processes.
When I execute, I get the error: ORA-02089: COMMIT is not allowed in a subordinate session
I'm declaring the transaction as in the code below:
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
using (TransactionScope transacao = new TransactionScope(TransactionScopeOption.Required, options))
{
....
transacao.Complete();
}
What can I do to resolve that?
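One common workaround (a sketch, not a definitive fix) is to run the committing procedure outside the ambient transaction with TransactionScopeOption.Suppress, so its internal COMMIT no longer executes in a subordinate session. `CallLegacyProcedure` is a placeholder for the actual procedure call; note that the procedure's work will then not roll back with the outer scope.

```csharp
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;

using (TransactionScope transacao = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... inserts/updates/deletes that should stay transactional ...

    // Suppress the ambient transaction just for the stored procedure call.
    using (TransactionScope suppress = new TransactionScope(TransactionScopeOption.Suppress))
    {
        CallLegacyProcedure(); // placeholder: runs on its own session, free to COMMIT
        suppress.Complete();
    }

    transacao.Complete();
}
```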

Related

Setback Isolation-Level with C# in SQL-Server 2016 after call [duplicate]

As demonstrated by previous Stack Overflow questions (TransactionScope and Connection Pooling and How does SqlConnection manage IsolationLevel?), the transaction isolation level leaks across pooled connections with SQL Server and ADO.NET (also System.Transactions and EF, because they build on top of ADO.NET).
This means that the following dangerous sequence of events can happen in any application:
A request happens which requires an explicit transaction (say, Serializable) to ensure data consistency. When it finishes, its connection goes back to the pool with that isolation level still set.
Any other request comes in which does not use an explicit transaction because it is only doing noncritical reads. If it picks up that pooled connection, it will now execute as Serializable, potentially causing dangerous blocking and deadlocks.
The question: What is the best way to prevent this scenario? Is it really required to use explicit transactions everywhere now?
Here is a self-contained repro. You will see that the third query will have inherited the Serializable level from the second query.
using System;
using System.Data.SqlClient;
using System.Transactions;

class Program
{
static void Main(string[] args)
{
RunTest(null);
RunTest(IsolationLevel.Serializable);
RunTest(null);
Console.ReadKey();
}
static void RunTest(IsolationLevel? isolationLevel)
{
using (var tran = isolationLevel == null ? null : new TransactionScope(TransactionScopeOption.Required, new TransactionOptions() { IsolationLevel = isolationLevel.Value }))
using (var conn = new SqlConnection("Data Source=(local); Integrated Security=true; Initial Catalog=master;"))
{
conn.Open();
var cmd = new SqlCommand(@"
select
case transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'RepeatableRead'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot'
end as lvl, @@SPID
from sys.dm_exec_sessions
where session_id = @@SPID", conn);
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
Console.WriteLine("Isolation Level = " + reader.GetValue(0) + ", SPID = " + reader.GetValue(1));
}
}
if (tran != null) tran.Complete();
}
}
}
Output:
Isolation Level = ReadCommitted, SPID = 51
Isolation Level = Serializable, SPID = 51
Isolation Level = Serializable, SPID = 51 //leaked!
The connection pool calls sp_resetconnection before recycling a connection. Resetting the transaction isolation level is not in the list of things that sp_resetconnection does. That would explain why "serializable" leaks across pooled connections.
I guess you could start each query by making sure it's at the right isolation level:
if not exists (
select *
from sys.dm_exec_sessions
where session_id = @@SPID
and transaction_isolation_level = 2
)
set transaction isolation level read committed
Another option: connections with a different connection string do not share a connection pool. So if you use another connection string for the "serializable" queries, they won't share a pool with the "read committed" queries. An easy way to alter the connection string is to use a different login. You could also add a random option like Persist Security Info=False;.
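For example (a sketch; `baseConnString` and the application names are placeholders), appending a distinct `Application Name` keeps the two workloads in separate pools:

```csharp
// Different connection strings -> different connection pools.
// baseConnString is a placeholder for your real connection string.
var readCommittedConnString = baseConnString + ";Application Name=MyApp-ReadCommitted";
var serializableConnString  = baseConnString + ";Application Name=MyApp-Serializable";

// Connections opened with serializableConnString are never recycled
// into the pool used by readCommittedConnString, so a leaked
// SERIALIZABLE level cannot taint the read-committed workload.
using (var conn = new SqlConnection(serializableConnString))
{
    conn.Open();
    // ... serializable work ...
}
```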
Finally, you could make sure every "serializable" query resets the isolation level before it returns. If a "serializable" query fails to complete, you could clear the connection pool to force the tainted connection out of the pool:
SqlConnection.ClearPool(yourSqlConnection);
This is potentially expensive, but failing queries are rare, so you should not have to call ClearPool() often.
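The pattern might look like this (a sketch; `RunSerializableQuery` is a placeholder for a query that resets the isolation level at its end):

```csharp
// If a serializable query fails before it can reset the isolation
// level, evict the tainted connection from the pool.
try
{
    RunSerializableQuery(conn); // placeholder: serializable work that ends with
                                // SET TRANSACTION ISOLATION LEVEL READ COMMITTED
}
catch
{
    SqlConnection.ClearPool(conn); // failures are rare, so the cost is acceptable
    throw;
}
```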
In SQL Server 2014 this seems to have been fixed, provided TDS protocol 7.3 or higher is used.
Running on SQL Server version 12.0.2000.8 the output is:
ReadCommitted
Serializable
ReadCommitted
Unfortunately this change is not mentioned in any documentation such as:
Behavior Changes to Database Engine Features in SQL Server 2014
Breaking Changes to Database Engine Features in SQL Server 2014
But the change has been documented on a Microsoft Forum.
Update 2017-03-08
Unfortunately this was later "unfixed" in SQL Server 2014 CU6 and SQL Server 2014 SP1 CU1 since it introduced a bug:
FIX: The transaction isolation level is reset incorrectly when the SQL Server connection is released in SQL Server 2014
"Assume that you use the TransactionScope class in SQL Server client-side source code, and you do not explicitly open the SQL Server connection in a transaction. When the SQL Server connection is released, the transaction isolation level is reset incorrectly."
Workaround
It appears that, since passing through a parameter makes the driver use sp_executesql, this forces a new scope, similar to a stored procedure. The scope is rolled back after the end of the batch.
Therefore, to avoid the leak, pass through a dummy parameter, as shown below.
using (var conn = new SqlConnection(connString))
using (var comm = new SqlCommand(@"
SELECT transaction_isolation_level FROM sys.dm_exec_sessions where session_id = @@SPID
", conn))
{
conn.Open();
Console.WriteLine(comm.ExecuteScalar());
}
using (var conn = new SqlConnection(connString))
using (var comm = new SqlCommand(@"
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT transaction_isolation_level FROM sys.dm_exec_sessions where session_id = @@SPID
", conn))
{
comm.Parameters.Add("@dummy", SqlDbType.Int).Value = 0; // see with and without
conn.Open();
Console.WriteLine(comm.ExecuteScalar());
}
using (var conn = new SqlConnection(connString))
using (var comm = new SqlCommand(@"
SELECT transaction_isolation_level FROM sys.dm_exec_sessions where session_id = @@SPID
", conn))
{
conn.Open();
Console.WriteLine(comm.ExecuteScalar());
}
For those using EF in .NET, you can fix this for your whole application by setting a different appname per isolation level (as also stated by @Andomar):
//prevent isolationlevel leaks
//https://stackoverflow.com/questions/9851415/sql-server-isolation-level-leaks-across-pooled-connections
public static DataContext CreateContext()
{
string isolationlevel = Transaction.Current?.IsolationLevel.ToString();
string connectionString = ConfigurationManager.ConnectionStrings["yourconnection"].ConnectionString;
connectionString = Regex.Replace(connectionString, "APP=([^;]+)", "App=$1-" + isolationlevel, RegexOptions.IgnoreCase);
return new DataContext(connectionString);
}
Strange this is still an issue 8 years later ...
I just asked a question on this topic and added a piece of C# code, which can help around this problem (meaning: change isolation level only for one transaction).
Change isolation level in individual ADO.NET transactions only
It is basically a class to be wrapped in a 'using' block, which queries the original isolation level before the change and restores it afterwards.
It does, however, require two additional round trips to the DB to check and restore the default isolation level, and I am not absolutely sure that it will never leak the altered isolation level, although I see very little danger of that.
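A minimal sketch of such a wrapper (assumed names; the actual class in the linked question may differ, and `conn` must already be open):

```csharp
// Save the session's isolation level on entry, restore it on Dispose.
// Costs two extra round trips, as noted above.
sealed class IsolationLevelScope : IDisposable
{
    private readonly SqlConnection _conn;
    private readonly string _original;

    public IsolationLevelScope(SqlConnection conn, string level)
    {
        _conn = conn;
        using (var cmd = new SqlCommand(
            "SELECT CASE transaction_isolation_level " +
            "WHEN 1 THEN 'READ UNCOMMITTED' WHEN 2 THEN 'READ COMMITTED' " +
            "WHEN 3 THEN 'REPEATABLE READ' WHEN 4 THEN 'SERIALIZABLE' " +
            "WHEN 5 THEN 'SNAPSHOT' END FROM sys.dm_exec_sessions " +
            "WHERE session_id = @@SPID", _conn))
        {
            _original = (string)cmd.ExecuteScalar(); // remember the current level
        }
        using (var cmd = new SqlCommand("SET TRANSACTION ISOLATION LEVEL " + level, _conn))
            cmd.ExecuteNonQuery();
    }

    public void Dispose()
    {
        // Restore the original level so nothing leaks back into the pool.
        using (var cmd = new SqlCommand("SET TRANSACTION ISOLATION LEVEL " + _original, _conn))
            cmd.ExecuteNonQuery();
    }
}
```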

Why does SQL Server not respect the .Net isolation level?

I have this code:
var to = new TransactionOptions();
to.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
using (var ts = new TransactionScope(TransactionScopeOption.Required, to))
{
someQuery.ToList();
ts.Complete();
}
No matter what, the SQL Server Profiler shows that "someQuery" (and any other query on this transaction) run with an isolation level of "Read Committed".
The only way I can force it to run as ReadUncommitted is if, before someQuery.ToList();, I execute this line:
myContext.Database.ExecuteSqlCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
Why does SQL Server not respect the .Net isolation level? What can I do about this?
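One thing worth checking (a sketch, not a definitive diagnosis; `MyContext` and `SomeEntities` are placeholders): a connection only picks up the scope's isolation level if it enlists in the ambient transaction, which requires it to be opened inside the scope. If the EF context (and its connection) already existed before the scope, the query runs at the connection's previous level.

```csharp
var to = new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted };
using (var ts = new TransactionScope(TransactionScopeOption.Required, to))
using (var ctx = new MyContext()) // placeholder context, created inside the scope
{
    ctx.Database.Connection.Open(); // opens (and therefore enlists) inside the scope
    var result = ctx.SomeEntities.ToList(); // placeholder query, now ReadUncommitted
    ts.Complete();
}
```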

SSIS Script task enlist in the current transaction

I have a package in SSIS with multiple tasks. I am loading files; if the File System task at the end fails, I want to be able to roll back the transaction. My package looks like this.
I'd like to be able to roll back all the operations the SSIS script has done. To do that, I need the SSIS script to enlist in the transaction created by the BEGIN_TRANSACTION SQL task. How can I do that?
In SSIS, to get hold of the transaction I do:
object rawConnection = Dts.Connections["destination_ado"].AcquireConnection(Dts.Transaction);
myADONETConnection = (SqlConnection)rawConnection;
Then I do a BulkCopy:
using (SqlBulkCopy sbc = new SqlBulkCopy(myADONETConnection))
{
sbc.DestinationTableName = "[" + SCHEMA_DESTINATION + "].[" + TABLE_DESTINATION + "]";
// sbc.DestinationTableName = "test_load";
// Number of records to be processed in one go
sbc.BatchSize = 10000;
// Finally write to server
sbc.WriteToServer(destination);
}
myADONETConnection.Close();
How do I tell the SqlBulkCopy to use the existing transaction?
In the options of the connection in SSIS I use RetainSameConnection: true.
Thanks for all your thoughts,
Vincent
Looking at your package, I see that you are iterating through a bunch of files, and for each iteration you are loading the file's contents into your destination tables.
You want all your data loads to be atomic, i.e. fully loaded or not at all.
With this in mind I would like to suggest the following approaches; in all of them there is no need to use a Script Task or explicit Begin/End Transaction blocks:
Use a Data Flow Task and in its properties set TransactionOption to Required. This will do the job of enabling the transaction on the block.
Have error redirection at the destination to an error table in a batch-wise manner, so as to reduce errors to the lowest minimum possible (see http://agilebi.com/jwelch/2008/09/05/error-redirection-with-the-ole-db-destination/). We used 100k, 50k, 1 as three batches successfully when doing data loads of over a million rows per day. You can then deal with those errors separately.
If the use case is such that the whole load has to fail, then just redirect the failed records, move the file to a 'failed' folder using a File System Task (FST), and have a DFT following the FST that performs a lookup on the destination and deletes all those records.
So I found a solution.
In the first script block (extract and load) I create a transaction with this code:
SqlTransaction tran = myADONETConnection.BeginTransaction(IsolationLevel.ReadCommitted);
Then I use this transaction in the SqlBulkCopy this way:
using (SqlBulkCopy sbc = new SqlBulkCopy(myADONETConnection,SqlBulkCopyOptions.Default,tran))
Pass the transaction object to an SSIS variable:
Dts.Variables["User::transaction_object"].Value = tran;
Then in my two blocks at the end, Commit transaction and Rollback transaction, I use an SSIS script to read the variable and either commit or roll back the transaction:
SqlTransaction tran = (SqlTransaction)Dts.Variables["User::transaction_object"].Value;
tran.Commit();
As a result, if a file cannot be moved to the Archive folder I don't get it loaded twice; a transaction is fired for each file, so if a file can't be moved, only the data for that file gets rolled back and the enumerator keeps going to the next one.
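Putting the pieces together, the first script task looks roughly like this (a sketch; `destination`, `SCHEMA_DESTINATION`, and `TABLE_DESTINATION` are from the snippets above, and RetainSameConnection=True is assumed so the same SqlConnection survives across tasks):

```csharp
// Start a transaction on the retained ADO.NET connection,
// bulk copy under it, and hand the transaction to later tasks.
SqlConnection myADONETConnection =
    (SqlConnection)Dts.Connections["destination_ado"].AcquireConnection(Dts.Transaction);

SqlTransaction tran = myADONETConnection.BeginTransaction(IsolationLevel.ReadCommitted);

using (SqlBulkCopy sbc = new SqlBulkCopy(myADONETConnection, SqlBulkCopyOptions.Default, tran))
{
    sbc.DestinationTableName = "[" + SCHEMA_DESTINATION + "].[" + TABLE_DESTINATION + "]";
    sbc.BatchSize = 10000;
    sbc.WriteToServer(destination); // destination = the data being loaded
}

// Do NOT close the connection here; the later Commit/Rollback tasks need it.
Dts.Variables["User::transaction_object"].Value = tran;
```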

Using EF transaction across libraries

I have this class A that begins an EF transaction, where UserDb is my DbContext:
using (DbContextTransaction dbTransaction = UserDb.Database.BeginTransaction(IsolationLevel.ReadUncommitted))
Then I have several inserts, and there is a need to call another library (which essentially lives on the same server, in the bin folder) to do another insert.
new ExtLibrary().CreatePoweruser(3, UserDb);
As you can see, I am passing the same context. And this statement is within the top using, which I thought would mean that everything is in the same transaction.
Extlibrary code:
Data.Entities.User UserEntity = new Data.Entities.User {
UserTypeId =34,
CreatedDate = DateTime.Now,
CreatedBy = "mk92Test",
};
UserDb.Users.Add(UserEntity);
UserDb.SaveChanges();
Everything works unless the ExtLibrary insert fails. Control comes back to the parent class, which has rollback code in its exception handler, and I get "The underlying provider failed on Rollback". But the first set of inserts certainly does roll back, even after this exception.
Please advise.
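One pattern worth trying (a sketch, not a confirmed fix; `UserDb` and `ExtLibrary` are from the question): keep the commit and rollback in a single try/catch and guard the rollback, since the provider may already have rolled the transaction back by the time control returns, in which case calling Rollback again throws and masks the original exception.

```csharp
using (DbContextTransaction dbTransaction =
           UserDb.Database.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // ... first set of inserts on UserDb ...
        new ExtLibrary().CreatePoweruser(3, UserDb); // same context => same transaction
        dbTransaction.Commit();
    }
    catch
    {
        try { dbTransaction.Rollback(); }
        catch (InvalidOperationException)
        {
            // The underlying transaction may already be rolled back or
            // closed; swallow this so the original exception propagates.
        }
        throw;
    }
}
```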

Stored Procedure without transaction in Entity Framework

I'm calling a stored procedure in Entity Framework 6 that can create databases and tables if necessary. It is throwing this error:
Message "CREATE DATABASE statement not allowed within multi-statement transaction.\r\nALTER DATABASE statement not allowed within multi-statement transaction.\r\nDatabase 'CoreSnapshotJS3' does not exist. Make sure that the name is entered correctly." string
I do not want it in a transaction, and have used this to suppress the transaction:
using (var transation = new TransactionScope(TransactionScopeOption.Suppress))
{
return ((IObjectContextAdapter)this).ObjectContext.ExecuteFunction("spCreateSnapshotFromQueue", snapshotQueueIDParameter);
}
It still throws an error.
How do I stop automatic transactions?
I found a way:
var snapshotQueueIDParameter = new SqlParameter("SnapshotQueueID", entityId);
return _db.Database.ExecuteSqlCommand(TransactionalBehavior.DoNotEnsureTransaction,
"EXEC spCreateSnapshotFromQueue @SnapshotQueueID", snapshotQueueIDParameter);
