I have decided to try using a TransactionScope, rather than the SqlTransaction class.
The following is my code, wrapped in a TransactionScope:
using (var transaction = new System.Transactions.TransactionScope())
{
    using (MySqlCommand cmd = new MySqlCommand(sql, connection))
    {
        if (listParameters != null && listParameters.Count > 0)
        {
            foreach (string currentKey in listParameters.Keys)
            {
                cmd.Parameters.Add(new MySqlParameter(currentKey, GetDictionaryValue(listParameters, currentKey)));
            }
        }

        using (MySqlDataReader reader = cmd.ExecuteReader())
        {
            dtResults.Load(reader);
        }
    }

    transaction.Complete();
}
The code works, however I am not binding the MySqlCommand cmd object with a transaction at any point. Is this a problem?
Yes, it is a problem: that is not the correct usage.
The correct usage is to create the connection after creating the TransactionScope. The connection will then detect the ambient transaction and enlist itself.
using (var transaction = new System.Transactions.TransactionScope())
{
    using (var connection = new MySqlConnection())
    {
        ...
    }
}
If you create the connection before the scope, that connection will be outside the scope, even if you create the command after creating the scope.
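If you really do need to use a connection that was opened before the scope, some providers let you enlist it explicitly once the ambient transaction exists. This is only a sketch, and it assumes the provider (Connector/NET here) supports DbConnection.EnlistTransaction:

using (var transaction = new System.Transactions.TransactionScope())
{
    // 'connection' is assumed to have been created and opened before the scope,
    // so it did not enlist automatically; enlist it by hand (provider support required).
    connection.EnlistTransaction(System.Transactions.Transaction.Current);

    using (MySqlCommand cmd = new MySqlCommand(sql, connection))
    {
        cmd.ExecuteNonQuery();
    }

    transaction.Complete();
}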
Also note that TransactionScope defaults to the Serializable isolation level. This is the strictest level, but also the one that allows the least concurrency, so you often want to explicitly set a more common isolation level:
using (var transaction = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
{
}
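Putting both points together, a corrected version of the question's snippet might look like the following. This is only a sketch that reuses the question's sql, listParameters, dtResults and GetDictionaryValue names and assumes a connectionString is available:

using (var transaction = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
using (var connection = new MySqlConnection(connectionString))
{
    connection.Open(); // opened inside the scope, so it enlists in the ambient transaction

    using (MySqlCommand cmd = new MySqlCommand(sql, connection))
    {
        if (listParameters != null && listParameters.Count > 0)
        {
            foreach (string currentKey in listParameters.Keys)
            {
                cmd.Parameters.Add(new MySqlParameter(currentKey, GetDictionaryValue(listParameters, currentKey)));
            }
        }

        using (MySqlDataReader reader = cmd.ExecuteReader())
        {
            dtResults.Load(reader);
        }
    }

    transaction.Complete();
}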
Related
I'm trying to figure out how to set up my database access correctly while using SqlClient against a Microsoft SQL Server. For the most part it is working, but one particular scenario is giving me trouble: attempting to use two connections simultaneously on the same thread, one with an open data reader and the other performing a delete operation.
The following code demonstrates my conundrum:
public class Database {
    ...
    internal SqlConnection CreateConnection() => new SqlConnection(connectionString);
    ...
}

public IEnumerable<Model> GetModels() {
    var cmd = new SqlCommand() { ... };
    using(var conn = db.CreateConnection()) {
        conn.Open();
        cmd.Connection = conn;
        using(var reader = cmd.ExecuteReader()) {
            while(reader.Read()) {
                var m = new Model();
                // deserialization logic
                yield return m;
            }
        }
    }
}

public void Delete(int id) {
    var cmd = new SqlCommand() { ... };
    using(var conn = db.CreateConnection()) {
        conn.Open(); // throwing the error here
        cmd.Connection = conn;
        cmd.ExecuteNonQuery();
    }
}
Application Code:
using(var scope = new TransactionScope()) {
    var models = GetModels();
    foreach(var m in models) {
        Delete(m.Id); // throws an exception
    }
    scope.Complete();
}
For whatever reason, an exception is thrown by the above code while trying to execute the Delete operation:
System.Transactions.TransactionAbortedException: The transaction has aborted. ---> System.Transactions.TransactionPromotionException: Failure while attempting to promote transaction. ---> System.Data.SqlClient.SqlException: There is already an open DataReader associated with this Command which must be closed first. ---> System.ComponentModel.Win32Exception: The wait operation timed out
Now, I have confirmed that if I set either MultipleActiveResultSets=true or Pooling=false in the connection string, then the above application code works without error. However, it doesn't seem like I should need to set either of those. If I open two connections simultaneously, should they not be separate connections? Why, then, am I getting an error from the Delete connection saying that there's an open DataReader?
Please help <3
By far the easiest fix here is to simply load all the models outside the transaction before you go deleting any, e.g.:
var models = GetModels().ToList();
using(var scope = new TransactionScope()) {
    foreach(var m in models) {
        Delete(m.Id); // no longer throws
    }
    scope.Complete();
}
Even fetching the models inside the transaction should work:
using(var scope = new TransactionScope()) {
    var models = GetModels().ToList();
    foreach(var m in models) {
        Delete(m.Id); // no longer throws
    }
    scope.Complete();
}
so long as you don't leave the connection open during the iteration. If you allow the connection in GetModels() to close, it will be returned to the connection pool, and be available for use for subsequent methods that are enlisted in the same transaction.
In the current code the connection in GetModels() is kept open during the foreach loop and Delete(id) has to open a second connection and try to create a distributed transaction, which is failing.
Without MultipleActiveResultSets, the GetModels connection can't be promoted to a distributed transaction in the middle of returning query results, and setting Pooling=false will not make this error go away.
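For completeness, the MultipleActiveResultSets switch the question mentions is the relevant knob if you really do need the reader to stay open while the deletes run. A minimal sketch of setting it via SqlConnectionStringBuilder, mirroring the question's Database.CreateConnection (the connectionString name is assumed from the question):

// Assumed sketch: enable MARS on the existing connection string so a second
// command can execute while a reader is still open on the same transaction.
internal SqlConnection CreateConnection()
{
    var builder = new SqlConnectionStringBuilder(connectionString)
    {
        MultipleActiveResultSets = true
    };
    return new SqlConnection(builder.ConnectionString);
}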
Here's a simplified repro to play with:
using Microsoft.Data.SqlClient;
using System.Collections.Generic;
using System.Transactions;

namespace SqlClientTest
{
    class Program
    {
        static void Main(string[] args)
        {
            Setup();

            var topt = new TransactionOptions();
            topt.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
            using (new TransactionScope(TransactionScopeOption.Required, topt))
            {
                foreach (var id in GetIds())
                {
                    Delete(id);
                }
            }
        }

        static string constr = @"server=.;database=tempdb;Integrated Security=true;TrustServerCertificate=true;";

        public static void Setup()
        {
            using (var con = new SqlConnection(constr))
            {
                con.Open();
                var cmd = con.CreateCommand();
                cmd.CommandText = "drop table if exists ids; select object_id id into ids from sys.objects";
                cmd.ExecuteNonQuery();
            }
        }

        public static IEnumerable<int> GetIds()
        {
            using (var con = new SqlConnection(constr))
            {
                con.Open();
                var cmd = con.CreateCommand();
                cmd.CommandText = "select object_id id from sys.objects";
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        yield return reader.GetInt32(0);
                    }
                }
            }
        }

        public static void Delete(int id)
        {
            using (var con = new SqlConnection(constr))
            {
                con.Open();
                var cmd = con.CreateCommand();
                cmd.CommandText = "insert into ids(id) values (@id)";
                cmd.Parameters.Add(new SqlParameter("@id", id));
                cmd.ExecuteNonQuery();
            }
        }
    }
}
And here's what Profiler shows when run (the trace image is omitted here).
As far as I understand, the main reason here is your yielding iteration.
The DB connection has not yet been disposed because it is still in use by your iteration (the foreach). If, for example, you called .ToList() at that point, it would return all the entries and then dispose of the connection.
See here for a better explanation of how yield works in an iteration: https://stackoverflow.com/a/58449809/3329836
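To make the connection-lifetime difference concrete, here is an illustrative sketch built around the question's GetModels() and Delete() methods (this code is mine, not from the original answer):

// Lazy: GetModels() is an iterator, so its connection stays open for as long as
// the caller is still pulling items from it.
foreach (var m in GetModels())
{
    Delete(m.Id); // runs while the reader's connection is still open
}

// Eager: ToList() drains the iterator immediately, the reader and connection in
// GetModels() are disposed, and only then does the loop start.
var models = GetModels().ToList();
foreach (var m in models)
{
    Delete(m.Id); // runs after the first connection has been closed
}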
I am trying to use the Entity Framework to create my data access layer. In the work I have done up to now I have used ADO.NET. I am trying to get my head around how transactions work in EF. I have read loads, but it has all confused me even more than before! I would usually do something like this (simplified for the example):
using (SqlConnection conn = new SqlConnection(_connString))
{
    conn.Open();
    using (SqlTransaction trans = conn.BeginTransaction())
    {
        try
        {
            using (SqlCommand cmd = new SqlCommand("usp_CreateNewInvoice", conn))
            {
                cmd.Transaction = trans;
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(new SqlParameter("@InvoiceName", Invoice.invoicename));
                cmd.Parameters.Add(new SqlParameter("@InvoiceAddress", Invoice.invoiceaddress));
                _invoiceid = Convert.ToInt32(cmd.ExecuteScalar());
            }

            foreach (InvoiceLine inLine in Invoice.Lines)
            {
                using (SqlCommand cmd = new SqlCommand("usp_InsertInvoiceLines", conn))
                {
                    cmd.Transaction = trans;
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.Add(new SqlParameter("@InvNo", _invoiceid));
                    cmd.Parameters.Add(new SqlParameter("@InvLineQty", inLine.lineqty));
                    cmd.Parameters.Add(new SqlParameter("@InvLineGrossPrice", inLine.linegrossprice));
                    cmd.ExecuteNonQuery();
                }
            }

            trans.Commit();
        }
        catch (SqlException sqlError)
        {
            trans.Rollback();
        }
    }
}
Now I want to do the same in EF:
using (TransactionScope scope = new TransactionScope())
{
    try
    {
        var DbContext = new CCTStoreEntities();

        CCTInvoice invHead = new CCTInvoice();
        invHead.Name = Invoice.invoicename;
        invHead.Address = Invoice.invoiceaddress;
        DbContext.CCTInvoices.Add(invHead);
        DbContext.SaveChanges(false);

        _iid = invHead.InvoiceId;

        foreach (InvoiceLine inLine in Invoice.Lines)
        {
            CCTInvoiceLine invLine = new CCTInvoiceLine();
            invLine.InvoiceNo = _iid;
            invLine.Quantity = inLine.lineqty;
            invLine.GrossPrice = inLine.linegrossprice;
            DbContext.CCTInvoiceLines.Add(invLine);
            DbContext.SaveChanges(false);
        }

        DbContext.SaveChanges();
        scope.Complete();
    }
    catch
    {
        //Something went wrong
        //Rollback!
    }
}
From what I read, SaveChanges(false) means the changes being made will continue to be tracked. But how do I roll back the transaction if something goes wrong?
You don't need to do anything in your catch block. As long as you don't call DbContext.SaveChanges, no changes will be sent to the database, and they will be lost once the DbContext is disposed.
You do have a problem, though: the DbContext must be wrapped in a using block, as follows, to be properly disposed. By the way, I don't think DbContext.SaveChanges(false) is needed; your code should work with just the final DbContext.SaveChanges(). EF will take care of wiring up all your foreign keys, so you don't need to do that explicitly.
using (TransactionScope scope = new TransactionScope())
{
    try
    {
        using (var DbContext = new CCTStoreEntities())
        {
            CCTInvoice invHead = new CCTInvoice();
            invHead.Name = Invoice.invoicename;
            invHead.Address = Invoice.invoiceaddress;
            DbContext.CCTInvoices.Add(invHead);
            DbContext.SaveChanges(false); // This is not needed
            _iid = invHead.InvoiceId; // This is not needed

            foreach (InvoiceLine inLine in Invoice.Lines)
            {
                CCTInvoiceLine invLine = new CCTInvoiceLine();
                invLine.InvoiceNo = _iid; // This is not needed
                invLine.Quantity = inLine.lineqty;
                invLine.GrossPrice = inLine.linegrossprice;
                DbContext.CCTInvoiceLines.Add(invLine);
                DbContext.SaveChanges(false); // This is not needed
            }

            DbContext.SaveChanges();
            scope.Complete();
        }
    }
    catch
    {
        //Something went wrong
        //Nothing to do here: the rollback is implicit because Complete() was never called
    }
}
The rollback mechanism in a TransactionScope is implicit.
Basically, if you don't call Complete before the TransactionScope is disposed, it will automatically roll back. See the "Rolling back a transaction" section in Implementing an Implicit Transaction Using Transaction Scope.
So technically, you don't even need to use a try...catch here (unless you want to perform some other action, like logging).
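Taking the implicit rollback into account, a minimal sketch of the same save without any try/catch could look like the following (it reuses the question's entity names and assumes CCTInvoice exposes a CCTInvoiceLines navigation collection):

using (TransactionScope scope = new TransactionScope())
using (var dbContext = new CCTStoreEntities())
{
    var invHead = new CCTInvoice();
    invHead.Name = Invoice.invoicename;
    invHead.Address = Invoice.invoiceaddress;
    dbContext.CCTInvoices.Add(invHead);

    foreach (InvoiceLine inLine in Invoice.Lines)
    {
        var invLine = new CCTInvoiceLine();
        invLine.Quantity = inLine.lineqty;
        invLine.GrossPrice = inLine.linegrossprice;
        invHead.CCTInvoiceLines.Add(invLine); // assumed navigation collection; EF fills in the FK
    }

    dbContext.SaveChanges(); // one round of inserts, inside the ambient transaction
    scope.Complete();        // without this line (e.g. on an exception) everything rolls back
}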
The easiest way to illustrate my question is with this C# code:
using (SqlCommand cmd = new SqlCommand("SELECT * FROM [tbl]", connection))
{
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        //Somewhere at this point a concurrent thread,
        //or another process, changes the [tbl] table data

        //Begin reading
        while (rdr.Read())
        {
            //Process the data
        }
    }
}
So what would happen to the data in rdr in such a situation?
I actually tested this. Test code:
using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["test"].ConnectionString))
{
    conn.Open();
    using (SqlCommand comm = new SqlCommand("select * from test", conn))
    {
        using (var reader = comm.ExecuteReader())
        {
            int i = 0;
            while (reader.Read())
            {
                if ((string)reader[1] == "stop")
                {
                    throw new Exception("Stop was found");
                }
            }
        }
    }
}
To test, I initialized the table with some dummy data (making sure that no row with the value 'stop' was included). Then I put a break point on the line int i = 0;. While the execution was halted on the break point, I inserted a line in the table with the 'stop' value.
The result was that depending on the amount of initial rows in the table, the Exception was thrown/not thrown. I did not try to pin down where exactly the row limit was. For ten rows, the Exception was not thrown, meaning the reader did not notice the row added from another process. With ten thousand rows, the exception was thrown.
So the answer is: It depends. Without wrapping the command/reader inside a Transaction, you cannot rely on either behavior.
Obligatory disclaimer: This is how it worked in my environment...
EDIT:
I tested using a local Sql server on my dev machine. It reports itself as:
Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64)
Regarding transactions:
Here's code where I use a transaction:
using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["test"].ConnectionString))
{
    conn.Open();
    using (var trans = conn.BeginTransaction())
    using (SqlCommand comm = new SqlCommand("select * from test", conn, trans))
    {
        using (var reader = comm.ExecuteReader())
        {
            int i = 0;
            while (reader.Read())
            {
                i++;
                if ((string)reader[1] == "stop")
                {
                    throw new Exception("Stop was found");
                }
            }
        }
        trans.Commit();
    }
}
In this code, I create the transaction without explicitly specifying an isolation level. That usually means that System.Data.IsolationLevel.ReadCommitted will be used (I think the default isolation level can be set in the Sql Server settings somewhere). In that case the reader behaves the same as before. If I change it to use:
...
using (var trans = conn.BeginTransaction(System.Data.IsolationLevel.Serializable))
...
the insert of the "stop" record is blocked until the transaction is committed. This means that while the reader is active, no changes to the underlying data are allowed by SQL Server.
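For completeness, the same Serializable behaviour can be expressed with the ambient-transaction style used elsewhere on this page; this is just a sketch using the same connection string lookup as above:

var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.Serializable
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["test"].ConnectionString))
using (var comm = new SqlCommand("select * from test", conn))
{
    conn.Open(); // enlists in the ambient Serializable transaction

    using (var reader = comm.ExecuteReader())
    {
        while (reader.Read())
        {
            // concurrent writers that would affect this result set are blocked
            // until the scope completes or is disposed
        }
    }

    scope.Complete();
}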
Playing with transactions for the first time I thought I'd get the following code to work:
using System;
using System.Data.SqlClient;
using System.Threading;
using System.Transactions;

namespace database
{
    class Program
    {
        static string connString = "Server=ServerName;Database=Demo;Trusted_Connection=True;";
        SqlConnection connection = new SqlConnection(connString);
        static Random r = new Random();

        static void Add()
        {
            try
            {
                Thread.Sleep(r.Next(0, 10));
                using (var trans = new TransactionScope())
                {
                    using (var conn = new SqlConnection(connString))
                    {
                        conn.Open();
                        var count = (int)new SqlCommand("select balance from bank WITH (UPDLOCK) where owner like '%Jan%'", conn).ExecuteScalar();
                        Thread.Sleep(r.Next(0, 10));
                        SqlCommand cmd = new SqlCommand("update bank set balance = " + ++count + " where owner like '%Jan%'", conn);
                        cmd.ExecuteNonQuery();
                    }
                    trans.Complete();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

        static void Remove()
        {
            try
            {
                Thread.Sleep(r.Next(0, 10));
                using (var trans = new TransactionScope())
                {
                    using (var conn = new SqlConnection(connString))
                    {
                        conn.Open();
                        var count = (int)new SqlCommand("select balance from bank WITH (UPDLOCK) where owner like '%Jan%'", conn).ExecuteScalar();
                        Thread.Sleep(r.Next(0, 10));
                        SqlCommand cmd = new SqlCommand("update bank set balance = " + --count + " where owner like '%Jan%'", conn);
                        cmd.ExecuteNonQuery();
                    }
                    trans.Complete();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 5; i++)
            {
                Thread t = new Thread(new ThreadStart(Add));
                t.Start();
            }
            for (int i = 0; i < 5; i++)
            {
                Thread t = new Thread(new ThreadStart(Remove));
                t.Start();
            }
            Console.ReadLine();
        }
    }
}
I assumed that at the end, after 100 adds and 100 subtractions, my balance would be the same as my starting point of 100; however, it keeps changing up and down every time I run the program, even with IsolationLevel Serializable. Could anyone tell me why? O_o
EDIT: Moved connection opening and closing to inside the transaction scope.
The problem now is that I get "Transaction (Process ID XX) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction"
Like Marc Gravell said:
Putting the connection inside the transaction scope and adding UPDLOCK to the select query, combined with changing the IsolationLevel to RepeatableRead, did the trick :)
static void Add()
{
    try
    {
        Thread.Sleep(r.Next(0, 10));
        using (var trans = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions() { IsolationLevel = IsolationLevel.RepeatableRead }))
        {
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();
                var count = (int)new SqlCommand("select balance from bank WITH (UPDLOCK) where owner like '%Jan%'", conn).ExecuteScalar();
                Thread.Sleep(r.Next(0, 10));
                SqlCommand cmd = new SqlCommand("update bank set balance = " + ++count + " where owner like '%Jan%'", conn);
                cmd.ExecuteNonQuery();
            }
            trans.Complete();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
1: currently the TransactionScope might be redundant and unused; try changing the transaction to wrap the connection, not the other way around (oh, and use using):
using (var trans = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions() { IsolationLevel = IsolationLevel.Serializable }))
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    //...
    trans.Complete();
}
this way, the connection should enlist correctly inside the transaction (and be cleaned up properly if something bad happens)
I think the above is the main problem; i.e. not enlisting in the transaction. That means that there can be lost changes, since the read/write operation is not actually being raised to a higher isolation level.
2: however, if you do that by itself, I expect you'll see deadlocks. To avoid deadlocks, if you know you're going to update, you might want to use (UPDLOCK) on that select - this will take a write lock at the start, so that if there is a competing thread you get a block rather than a deadlock.
To be clear, this deadlock scenario is caused by:
thread A reads the row, getting a read lock
thread B reads the row, getting a read lock
thread A tries to update the row, and is blocked by B
thread B tries to update the row, and is blocked by A
Adding the UPDLOCK, this becomes:
thread A reads the row, getting a write lock
thread B tries to read the row, and is blocked by A
thread A updates the row
thread A completes the transaction
thread B is able to continue, reads the row, getting a write lock
thread B updates the row
thread B completes the transaction
3: but querying to do a trivial update is silly; better just to issue an in-place update without selecting, i.e. update bank set balance = balance + 1 where ...
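A minimal sketch of that third option, reusing the question's connString and the RepeatableRead scope from the accepted answer (the method name is kept purely for illustration):

static void Add()
{
    using (var trans = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions() { IsolationLevel = IsolationLevel.RepeatableRead }))
    using (var conn = new SqlConnection(connString))
    using (var cmd = new SqlCommand("update bank set balance = balance + 1 where owner like '%Jan%'", conn))
    {
        conn.Open();           // enlists in the ambient transaction
        cmd.ExecuteNonQuery(); // single atomic read-modify-write on the server
        trans.Complete();
    }
}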
You have to open the connection inside the TransactionScope block.
Instead of
var conn = new SqlConnection(connString);
conn.Open();
using (var trans = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions() { IsolationLevel = IsolationLevel.Serializable }))
{
    // do stuff
}
use it like this
using (var trans = new TransactionScope(TransactionScopeOption.Required, new TransactionOptions() { IsolationLevel = IsolationLevel.Serializable }))
using (var conn = new SqlConnection(connString))
{
    conn.Open();
    // do stuff
}
This way opening the connection automatically enlists it in the TransactionScope as a lightweight transaction.
You can always look at the examples in the TransactionScope documentation.
Hi, I'm just reading about using TransactionScope. Previously I was used to handling transactions inside a single DB class, like this:
try
{
    con.Open();
    tran = con.BeginTransaction();

    OleDbCommand myCommand1 = new OleDbCommand(query1, con);
    OleDbCommand myCommand2 = new OleDbCommand(query2, con);
    myCommand1.Transaction = tran;
    myCommand2.Transaction = tran;

    // Save Master
    myCommand1.ExecuteNonQuery();

    // Save Children
    myCommand2.ExecuteNonQuery();

    // Commit transaction
    tran.Commit();
}
catch (OleDbException ex)
{
    tran.Rollback();
    lblError.Text = "An error occurred " + ex.ToString();
}
finally
{
    if (con != null)
    {
        con.Close();
    }
}
But now I have come to know that I can execute a transaction inside the business logic layer simply by using a TransactionScope object with separate DB classes, like this:
public static int Save(Employee myEmployee)
{
    using (TransactionScope myTransactionScope = new TransactionScope())
    {
        int EmployeeId = EmpDB.Save(myEmployee);

        foreach (Address myAddress in myEmployee.Addresses)
        {
            myAddress.EmployeeId = EmployeeId;
            AddressDB.Save(myAddress);
        }

        foreach (PhoneNumber myPhoneNumber in myEmployee.PhoneNumbers)
        {
            myPhoneNumber.EmployeeId = EmployeeId;
            PhoneNumberDB.Save(myPhoneNumber);
        }

        myTransactionScope.Complete();
        return EmployeeId;
    }
}
Which one is the recommended coding practice, and why? Is using a TransactionScope safe? Is it the more current way of doing things? I am confused about the two methods.
Thanks in advance.
One of the nice things about TransactionScope is that you don't need the try/catch block. You just have to call Complete on the scope to commit the transaction, and it rolls back automatically if an exception occurs.
You can also use other components that are able to participate in transactions, not just the DB connection. This is because the components (including the connection) look for an ambient Transaction on the current thread. It is this ambient transaction that is created by the call to
using (TransactionScope myTransactionScope = new TransactionScope())
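As a small illustration of that ambient transaction (this sketch is mine, not part of the original answer): inside the using block Transaction.Current is set, and anything opened there can find and enlist in it.

using System;
using System.Transactions;

class AmbientTransactionDemo
{
    static void Main()
    {
        using (var myTransactionScope = new TransactionScope())
        {
            // The scope has placed an ambient transaction on this thread; connections
            // opened here (and other enlistable resources) find it via Transaction.Current
            // and enlist automatically.
            Console.WriteLine(Transaction.Current.TransactionInformation.LocalIdentifier);

            myTransactionScope.Complete();
        }

        // Outside the scope there is no ambient transaction.
        Console.WriteLine(Transaction.Current == null); // True
    }
}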