I'm using nested transactions via the IDbConnection interface in C#. I have two methods that insert data into 2 different tables, but when it comes to the second insert, the first insert's transaction locks the second one, causing a timeout exception.
public void FirstInsert()
{
using (var cn = new Connection().GetConnection())
{
cn.Open();
using (var tran = cn.BeginTransaction())
{
try
{
//1st insert
SecondInsert(); //calling second insert method
tran.Commit();
}
catch
{
tran.Rollback();
}
}
}
}
public void SecondInsert()
{
using (var cn = new Connection().GetConnection())
{
cn.Open();
using (var tran = cn.BeginTransaction())
{
try
{
//2nd insert, this one fails
tran.Commit();
}
catch
{
tran.Rollback();
}
}
}
}
When I check in SQL Server, the first insert has SPID 56 and the second insert is performed with SPID 57. When I run
exec sp_who2
In the column "BlkBy" for SPID 57 it says it is blocked by SPID 56.
How can I overcome this problem?
Use one connection for both operations. This likely involves passing the connection object around.
Usually, the connection-plus-transaction-per-request pattern solves this issue well. Opening a connection inside all kinds of methods is a code smell; it shows that the infrastructure fails to manage connections in one place.
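A minimal sketch of that idea, assuming Connection().GetConnection() returns a plain IDbConnection as in the question (the table and column names are made up):
public void FirstInsert()
{
    using (var cn = new Connection().GetConnection())
    {
        cn.Open();
        using (var tran = cn.BeginTransaction())
        {
            try
            {
                using (var cmd = cn.CreateCommand())
                {
                    cmd.Transaction = tran;
                    cmd.CommandText = "INSERT INTO Table1 (Col1) VALUES ('a')"; //hypothetical 1st insert
                    cmd.ExecuteNonQuery();
                }
                SecondInsert(cn, tran); //reuse the same connection and transaction
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
                throw;
            }
        }
    }
}
public void SecondInsert(IDbConnection cn, IDbTransaction tran)
{
    using (var cmd = cn.CreateCommand())
    {
        cmd.Transaction = tran;
        cmd.CommandText = "INSERT INTO Table2 (Col1) VALUES ('b')"; //hypothetical 2nd insert
        cmd.ExecuteNonQuery();
    }
}
Both inserts now run on the same SPID, so nothing blocks, and the pair commits or rolls back as a unit.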
Your approach is correct, but there is no need for a separate connection object and transaction object in your second method, since the call to SecondInsert() is already inside the transaction scope. Your code can simply be
public void FirstInsert(){
using (var cn = new Connection().GetConnection()){
cn.Open();
using (var tran = cn.BeginTransaction()){
try{
//1st insert
SecondInsert(); //calling second insert method
tran.Commit();
}
catch{
tran.Rollback();
}
}
}
}
public void SecondInsert(){
//perform second insert operation
}
Related
I have the following code:
public void Execute(string Query, params SqlParameter[] Parameters)
{
using (var Connection = new SqlConnection(Configuration.ConnectionString))
{
Connection.Open();
using (var Command = new SqlCommand(Query, Connection))
{
if (Parameters.Length > 0)
{
Command.Parameters.Clear();
Command.Parameters.AddRange(Parameters);
}
Command.ExecuteNonQuery();
}
}
}
The method may be called 2 or 3 times for different queries, but in the same manner.
For example:
Insert an Employee
Insert Employee Certificates
Update Degree of Employee on another table [a failure can occur here, for example]
If point 3 fails, the commands that already executed shouldn't take effect and must be rolled back.
I know I can wrap the calls in a SqlTransaction and call its Commit() method. But what about the 3rd point if it fails? I think only point 3 would roll back, and points 1 and 2 would not. How do I solve this, and what approach should I take?
Should I use a SqlCommand[] array? What should I do?
I only found a similar question, but on CodeProject:
See Here
Without changing your Execute method, you can do this:
var tranOpts = new TransactionOptions()
{
IsolationLevel = IsolationLevel.ReadCommitted,
Timeout = TransactionManager.MaximumTimeout
};
using (var tran = new TransactionScope(TransactionScopeOption.Required, tranOpts))
{
Execute("INSERT ...");
Execute("INSERT ...");
Execute("UPDATE ...");
tran.Complete();
}
SqlClient will cache the internal SqlConnection that is enlisted in the Transaction and reuse it for each call to Execute. So you even end up with a local (not distributed) transaction.
This is all explained in the docs here: System.Transactions Integration with SQL Server
There are a few ways to do it.
The way that probably involves changing the least code and involves the least complexity is to chain multiple SQL statements into a single query. It's perfectly fine to build a string for the Query argument that runs more than one statement, including BEGIN TRANSACTION, COMMIT, and (if needed) ROLLBACK. Basically, keep a whole stored procedure in your C# code. This also has the nice benefit of making it easier to use version control with your procedures.
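As a sketch of what such a chained query could look like (the table and column names are made up; SET XACT_ABORT ON makes SQL Server abort and roll back the whole batch on most errors):
Execute(@"
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    DECLARE @id int;
    INSERT INTO Employee (Name) VALUES (@name);
    SET @id = SCOPE_IDENTITY();
    INSERT INTO EmployeeCertificate (EmployeeId, CertName) VALUES (@id, @cert);
    UPDATE EmployeeDegree SET Degree = @degree WHERE EmployeeId = @id;
    COMMIT;",
    new SqlParameter("@name", "Alice"),
    new SqlParameter("@cert", "MCSD"),
    new SqlParameter("@degree", "BSc"));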
But it still feels kind of hackish.
One way to reduce that effect is marking the Execute() method private. Then, have an additional method in the class for each query. In this way, the long SQL strings are isolated, and when you're using the database it feels more like using a local API. For more complicated applications, this might instead be a whole separate assembly with a few types managing logical functional areas, where the core methods like Execute() are internal. This is a good idea anyway, regardless of how you end up supporting transactions.
And speaking of procedures, stored procedures are also a perfectly fine way to handle this. Have one stored procedure to do all the work, and call it when ready.
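A sketch of the call site, assuming a hypothetical dbo.InsertEmployeeData procedure that wraps all three statements in its own transaction:
using (var connection = new SqlConnection(Configuration.ConnectionString))
using (var command = new SqlCommand("dbo.InsertEmployeeData", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@name", "Alice"); //whatever parameters the procedure expects
    connection.Open();
    command.ExecuteNonQuery();
}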
Another option is overloading the method to accept multiple queries and parameter collections:
public void Execute(string TransactionName, string[] Queries, params SqlParameter[][] Parameters)
{
    using (var Connection = new SqlConnection(Configuration.ConnectionString))
    {
        Connection.Open();
        using (var Transaction = Connection.BeginTransaction(TransactionName))
        {
            try
            {
                for (int i = 0; i < Queries.Length; i++)
                {
                    //associate each command with the connection and the transaction
                    using (var Command = new SqlCommand(Queries[i], Connection, Transaction))
                    {
                        if (Parameters[i].Length > 0)
                        {
                            Command.Parameters.AddRange(Parameters[i]);
                        }
                        Command.ExecuteNonQuery();
                    }
                }
                Transaction.Commit();
            }
            catch
            {
                Transaction.Rollback();
                throw; //I'm assuming you're handling exceptions at a higher level in the code
            }
        }
    }
}
Though I'm not sure how the params keyword works with an array of arrays... I've just not tried that option, but something along these lines would work. The weakness here is also that it's not trivial to have a later query depend on a result from an earlier query, and even queries with no parameter would still need a Parameters array as a placeholder.
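For what it's worth, params does accept a jagged array: each trailing argument just has to be a SqlParameter[]. A hypothetical call (query text elided) might look like:
Execute("EmployeeTran",
    new[] { "INSERT ...", "INSERT ...", "UPDATE ..." },
    new[] { new SqlParameter("@name", "Alice") },
    new[] { new SqlParameter("@cert", "MCSD") },
    new SqlParameter[0]); //placeholder for the parameterless query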
A final option is extending the type holding your Execute() method to support transactions. The trick here is it's common (and desirable) to have this type be static, but supporting transactions requires re-using common connection and transaction objects. Given the implied long-running nature of a transaction, you have to support more than one at a time, which means both instances and implementing IDisposable.
using (var connection = new SqlConnection(Configuration.ConnectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction("Transaction"))
    using (var command = connection.CreateCommand())
    {
        command.Transaction = transaction;
        try
        {
            if (Parameters.Length > 0)
            {
                command.Parameters.Clear();
                command.Parameters.AddRange(Parameters);
            }
            command.ExecuteNonQuery();
            transaction.Commit();
        }
        catch (Exception e)
        {
            try
            {
                transaction.Rollback();
            }
            catch (Exception ex2)
            {
                //trace: rollback itself can fail if the connection has dropped
            }
        }
    }
}
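And here is a minimal sketch of the disposable wrapper idea described above (the class and member names are my own invention, not an established API):
public sealed class SqlUnitOfWork : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public SqlUnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    public void Execute(string query, params SqlParameter[] parameters)
    {
        //every command shares the one connection and transaction
        using (var command = new SqlCommand(query, _connection, _transaction))
        {
            command.Parameters.AddRange(parameters);
            command.ExecuteNonQuery();
        }
    }

    public void Commit() => _transaction.Commit();

    public void Dispose()
    {
        //disposing an uncommitted SqlTransaction rolls it back
        _transaction.Dispose();
        _connection.Dispose();
    }
}
Callers then do:
using (var uow = new SqlUnitOfWork(Configuration.ConnectionString))
{
    uow.Execute("INSERT ...");
    uow.Execute("UPDATE ...");
    uow.Commit(); //skip this and disposal rolls everything back
}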
The idea: I am trying to run inserts into 2 databases using 2 different DbContexts. The goal is to allow a rollback of the inserts into both DBs in case of an exception from either one of them.
My code:
using (var db1 = new DbContext1())
{
db1.Database.Connection.Open();
using (var trans = db1.Database.Connection.BeginTransaction())
{
//do the insertion into db1
db1.SaveChanges();
using (var db2 = new DbContext2())
{
//do the insertions into db2
db2.SaveChanges();
}
trans.Commit();
}
}
On the first call to save changes, db1.SaveChanges(), I get an InvalidOperationException: "SqlConnection does not support parallel transactions".
I tried figuring out what it exactly means, why it happens, and how to solve it, but haven't been able to.
So my questions are:
What does it exactly mean, and why do I get this exception?
How can I solve it?
Is there a way to use BeginTransaction in a different way that won't cause this error?
Also, is this the proper way to use begin transaction or should I do something different?
***For clarification, I am using db1.Database.Connection.Open(); because otherwise I get a "connection is closed" error.
Instead of trying to stretch your connection and transaction across two DbContexts, you can handle your connection and transaction outside of your DbContexts, something like this:
using (var conn = new System.Data.SqlClient.SqlConnection("yourConnectionString"))
{
conn.Open();
using (var trans = conn.BeginTransaction())
{
try
{
using (var dbc1 = new System.Data.Entity.DbContext(conn, contextOwnsConnection: false))
{
dbc1.Database.UseTransaction(trans);
// do some work
// ...
dbc1.SaveChanges();
}
using (var dbc2 = new System.Data.Entity.DbContext(conn, contextOwnsConnection: false))
{
dbc2.Database.UseTransaction(trans);
// do some work
// ...
dbc2.SaveChanges();
}
trans.Commit();
}
catch
{
trans.Rollback();
}
}
}
I found out that I was simply abusing the syntax, so to help anyone who may stumble upon this question, this is the proper way to do it:
using (var db1 = new DbContext1())
{
using (var trans = db1.Database.BeginTransaction())
{
try
{
//do the insertion into db1
db1.SaveChanges();
using (var db2 = new DbContext2())
{
//do the insertions into db2
db2.SaveChanges();
}
trans.Commit();
}
catch (Exception e)
{
trans.Rollback();
}
}
}
I'm using System.Data.SQLite, and I have some code where I'm wrapping a series of commands in a SQLite transaction.
Is it required to set the Transaction property to the transaction instance? It seems as though the command automatically picks it up from the connection.
The reason this is important to me is determining what I have to pass down to helper functions--i.e. just a reference to the connection or to the transaction as well.
Edit:
Here's a simplified example.
private void ExecuteSql(SQLiteConnection conn, IEnumerable<string> sqlStatements)
{
using (var trans = conn.BeginTransaction())
{
try
{
foreach (var sql in sqlStatements) ExecuteSql(conn, sql);
}
catch
{
trans.Rollback();
throw; // pass up to higher level
}
trans.Commit();
}
}
private void ExecuteSql(SQLiteConnection conn, string sqlStatement)
{
using (var cmd = new SQLiteCommand(conn)
{
//Transaction = trans, -- necessary?
CommandText = sqlStatement,
})
{
cmd.ExecuteNonQuery();
}
}
No, you don't need to set the Transaction property manually.
Everything you do with the connection while the transaction is open will be tied to that transaction. However, this might not hold true for other database engines (although in general I think it does), so be careful if ever you change your DB.
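If you'd rather not rely on the implicit behavior, setting the property explicitly is cheap; you just have to pass the transaction down, e.g. (a sketch based on your helper):
private void ExecuteSql(SQLiteConnection conn, SQLiteTransaction trans, string sqlStatement)
{
    using (var cmd = new SQLiteCommand(conn)
    {
        Transaction = trans, //explicit, so the code also survives providers that require it
        CommandText = sqlStatement,
    })
    {
        cmd.ExecuteNonQuery();
    }
}
For comparison, System.Data.SqlClient does require it: executing a command on a SqlConnection with a pending local transaction throws unless the command's Transaction property is set.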
I find the ADO.NET interfaces convoluted for no good reason; there's lots of redundancy, and providers often have to jump through hoops to get optimal performance with certain parts of the interface (e.g. the IDataParameter/IDataParameterCollection monstrosity).
UPDATE based on accepted answer:
bool success = false;
connection.Open();
//explicit isolation level is best-practice
using (var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted))
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, tran)) //using! and pass the transaction in
{
    bulkCopy.DestinationTableName = "table";
    bulkCopy.ColumnMappings...
    using (var dataReader = new ObjectDataReader<SomeObject>(paths))
    {
        bulkCopy.WriteToServer(dataReader);
        success = true;
    }
    tran.Commit(); //commit, will not be called if exception escapes
}
return success;
I use the SqlBulkCopy class for large inserts and it works fine.
After executing WriteToServer and saving the data to the database, I want to know whether all the data was saved successfully, so I can return true/false; I need to save all or nothing.
var bulkCopy = new SqlBulkCopy(connection);
bulkCopy.DestinationTableName = "table";
bulkCopy.ColumnMappings...
using (var dataReader = new ObjectDataReader<SomeObject>(paths))
{
try
{
bulkCopy.WriteToServer(dataReader);
}
catch(Exception ex){ ... }
}
If the call to WriteToServer completed without exceptions, all rows were saved and are on disk. This is just the standard semantics for SQL Server DML. Nothing special with bulk copy.
Like all other DML, SqlBulkCopy is all-or-nothing as well, except if you configure a batch size, which you did not.
connection.Open();
//explicit isolation level is best-practice
using (var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted))
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, tran)) //using! and pass the transaction in
{
    bulkCopy.DestinationTableName = "table";
    bulkCopy.ColumnMappings...
    using (var dataReader = new ObjectDataReader<SomeObject>(paths))
    {
        //you need no explicit try-catch here
        bulkCopy.WriteToServer(dataReader);
    }
    tran.Commit(); //commit, will not be called if exception escapes
}
I've added your sample code, aligned with best practices.
There is no direct way of identifying if the process was completed successfully or not, other than to look for/catch any exceptions raised by WriteToServer() method.
An alternative approach might be to check the number of records in the database before the process, and then check it again after the process completes; the difference is the number that were inserted. Comparing this value against the number of records to be inserted could give an idea of failure or success. However, this is not foolproof, particularly if there are other processes inserting/deleting records.
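A sketch of that counting approach (the table name is hypothetical, and the counts are taken outside any transaction):
private static long CountRows(SqlConnection connection)
{
    using (var command = new SqlCommand("SELECT COUNT_BIG(*) FROM dbo.TargetTable", connection))
    {
        return (long)command.ExecuteScalar();
    }
}

//...
long before = CountRows(connection);
bulkCopy.WriteToServer(dataReader);
long inserted = CountRows(connection) - before; //compare against the expected row count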
However, these techniques in conjunction with TransactionScope - http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx - or something similar should achieve what you require.
EDIT
By default, all rows are sent as a single batch, so nothing is committed until the whole operation succeeds; a failure rolls everything back. However, if you set a BatchSize and apply an internal transaction to the bulk operation (SqlBulkCopyOptions.UseInternalTransaction), each batch is committed as it completes, so a failure in a row rolls back only the current batch, not the batches committed before it. For example:
using (SqlBulkCopy bulkCopy =
new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity
| SqlBulkCopyOptions.UseInternalTransaction))
{
bulkCopy.BatchSize = 10;
bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";
try
{
bulkCopy.WriteToServer(reader);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
bulkCopy.Close();
}
}
An error in the above operation would cause only the current batch of 10 rows to roll back; batches committed before it remain. See more details on this at http://msdn.microsoft.com/en-us/library/tchktcdk.aspx.
From the docs for this function, the following snippet suggests that you should catch any exceptions that are thrown; otherwise, you can take it that the operation was successful.
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString))
{
bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";
try
{
// Write from the source to the destination.
bulkCopy.WriteToServer(reader);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
finally
{
// Close the SqlDataReader. The SqlBulkCopy
// object is automatically closed at the end
// of the using block.
reader.Close();
}
}
If you want to be super-sure, execute a query against the database to check the rows are there, after the bulk copy completes.
Here is the current architecture of my TransactionScope source code. The third insert throws a .NET exception (not a SqlException) and it does not roll back the two previous insert statements. What am I doing wrong?
EDIT: I removed the try/catch from Insert2 and Insert3. I also removed the exception-handling utility from the Insert1 try/catch and put "throw ex". It still does not roll back the transaction.
EDIT 2: I added the try/catch back to the Insert3 method and just put a "throw" in the catch statement. It still does not roll back the transaction.
UPDATE: Based on the feedback I received, the "SqlHelper" class uses the SqlConnection object to establish a connection to the database, then creates a SqlCommand object, sets the CommandType property to "StoredProcedure", and calls the ExecuteNonQuery method of the SqlCommand.
I also did not add Transaction Binding=Explicit Unbind to the current connection string. I will add that during my next test.
public void InsertStuff()
{
try
{
using(TransactionScope ts = new TransactionScope())
{
//perform insert 1
using(SqlHelper sh = new SqlHelper())
{
SqlParameter[] sp = { /* create parameters for first insert */ };
sh.Insert("MyInsert1", sp);
}
//perform insert 2
this.Insert2();
//perform insert 3 - breaks here!!!!!
this.Insert3();
ts.Complete();
}
}
catch(Exception ex)
{
throw ex;
}
}
public void Insert2()
{
//perform insert 2
using(SqlHelper sh = new SqlHelper())
{
SqlParameter[] sp = { /* create parameters for second insert */ };
sh.Insert("MyInsert2", sp);
}
}
public void Insert3()
{
//perform insert 3
using(SqlHelper sh = new SqlHelper())
{
SqlParameter[] sp = { /*create parameters for third insert */ };
sh.Insert("MyInsert3", sp);
}
}
I have also run into a similar issue. My problem occurred because the SqlConnection I used in my SqlCommands was already open before the TransactionScope was created, so it never got enlisted in the TransactionScope as a transaction.
Is it possible that the SqlHelper class is reusing an instance of SqlConnection that is open before you enter your TransactionScope block?
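If the connection really must be opened first, you can enlist it in the ambient transaction manually; a sketch (existingConnection stands for whatever SqlConnection your helper is reusing):
using (TransactionScope ts = new TransactionScope())
{
    //the connection was opened before the scope existed, so enlist it by hand
    existingConnection.EnlistTransaction(Transaction.Current);
    //... run the inserts ...
    ts.Complete();
}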
It looks like you are catching the exception in Insert3(), so your code continues after the call. If you want it to roll back, you'll need to let the exception bubble up to the try/catch block in the main routine so that the ts.Complete() statement never gets called.
An implicit rollback will only occur if the using block is exited without calling ts.Complete(). Because you are handling the exception in Insert3(), the exception never causes the using statement to exit.
Either rethrow the exception or notify the caller that a rollback is needed (change the signature of Insert3() to bool Insert3()?).
(based on the edited version that doesn't swallow exceptions)
How long do the operations take? If any of them are very long running, it is possible that the Transaction Binding bug feature has bitten you - i.e. the connection has become detached. Try adding Transaction Binding=Explicit Unbind to the connection string.
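For example (the server and database names are placeholders):
"Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;Transaction Binding=Explicit Unbind"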
I don't see your helper class, but a TransactionScope rolls back if you don't call Complete(), even if the error comes from .NET code. I copied one example for you; you may be doing something wrong while debugging. This example has an error in .NET code and a similar catch block to yours.
private static readonly string _connectionString = ConnectionString.GetDbConnection();
private const string insertStr = @"INSERT INTO dbo.testTable (col1) VALUES (@test);";
/// <summary>
/// Execute command on DBMS.
/// </summary>
/// <param name="command">Command to execute.</param>
private void ExecuteNonQuery(IDbCommand command)
{
if (command == null)
throw new ArgumentNullException("Parameter 'command' can't be null!");
using (IDbConnection connection = new SqlConnection(_connectionString))
{
command.Connection = connection;
connection.Open();
command.ExecuteNonQuery();
}
}
public void FirstMethod()
{
IDbCommand command = new SqlCommand(insertStr);
command.Parameters.Add(new SqlParameter("@test", "Hello1"));
ExecuteNonQuery(command);
}
public void SecondMethod()
{
IDbCommand command = new SqlCommand(insertStr);
command.Parameters.Add(new SqlParameter("@test", "Hello2"));
ExecuteNonQuery(command);
}
public void ThirdMethodCauseNetException()
{
IDbCommand command = new SqlCommand(insertStr);
command.Parameters.Add(new SqlParameter("@test", "Hello3"));
ExecuteNonQuery(command);
int a = 0;
int b = 1/a;
}
public void MainWrap()
{
TransactionOptions tso = new TransactionOptions();
tso.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
//TransactionScopeOption.Required, tso
try
{
using (TransactionScope sc = new TransactionScope())
{
FirstMethod();
SecondMethod();
ThirdMethodCauseNetException();
sc.Complete();
}
}
catch (Exception ex)
{
logger.ErrorException("eee ",ex);
}
}
If you want to debug your transactions, you can use this script to see locks and waiting status etc.
SELECT
request_session_id AS spid,
CASE transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'Repeatable'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot' END AS TRANSACTION_ISOLATION_LEVEL ,
resource_type AS restype,
resource_database_id AS dbid,
DB_NAME(resource_database_id) as DBNAME,
resource_description AS res,
resource_associated_entity_id AS resid,
CASE
when resource_type = 'OBJECT' then OBJECT_NAME( resource_associated_entity_id)
ELSE 'N/A'
END as ObjectName,
request_mode AS mode,
request_status AS status
FROM sys.dm_tran_locks l
left join sys.dm_exec_sessions s on l.request_session_id = s.session_id
where resource_database_id = 24
order by spid, restype, dbname;
You will see one SPID for the two method calls before the exception method is called.
The default isolation level of TransactionScope is Serializable. You can read more about locks and transactions here.
I ran into a similar issue when I had a call to a WCF service operation in TransactionScope.
I noticed transaction flow was not allowed due to the 'TransactionFlow' attribute in the service interface. Therefore, the WCF service operation was not using the transaction used by the outer transaction scope. Changing it to allow transaction flow as shown below fixed my problem.
[TransactionFlow(TransactionFlowOption.NotAllowed)]
to
[TransactionFlow(TransactionFlowOption.Allowed)]
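For context, the attribute sits on the operation in the service contract; a sketch with hypothetical names:
[ServiceContract]
public interface IEmployeeService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void InsertEmployee(string name);
}
On the implementation side, the operation typically also needs [OperationBehavior(TransactionScopeRequired = true)] so that the flowed transaction is actually used.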