The idea: I am trying to run an insertion into 2 databases using 2 different DbContexts. The goal is to allow a rollback of the insertions into both DBs in case of an exception from either one of them.
My code:
using (var db1 = new DbContext1())
{
    db1.Database.Connection.Open();
    using (var trans = db1.Database.Connection.BeginTransaction())
    {
        //do the insertion into db1
        db1.SaveChanges();
        using (var db2 = new DbContext2())
        {
            //do the insertions into db2
            db2.SaveChanges();
        }
        trans.Commit();
    }
}
On the first call to SaveChanges (db1.SaveChanges();) I get an InvalidOperationException: "SqlConnection does not support parallel transactions".
I tried to figure out what exactly it means, why it happens and how to solve it, but haven't been able to.
So my questions are:
What exactly does it mean, and why do I get this exception?
How can I solve it?
Is there a way to use BeginTransaction in a different way that won't cause this error?
Also, is this the proper way to use BeginTransaction, or should I do something different?
***For clarification, I am using db1.Database.Connection.Open(); because otherwise I get a "connection is closed" error.
Instead of trying to stretch your connection and transaction across two DbContexts, you may handle your connection and transaction outside of your DbContexts, something like this:
using (var conn = new System.Data.SqlClient.SqlConnection("yourConnectionString"))
{
    conn.Open();
    using (var trans = conn.BeginTransaction())
    {
        try
        {
            using (var dbc1 = new System.Data.Entity.DbContext(conn, contextOwnsConnection: false))
            {
                dbc1.Database.UseTransaction(trans);
                // do some work
                // ...
                dbc1.SaveChanges();
            }
            using (var dbc2 = new System.Data.Entity.DbContext(conn, contextOwnsConnection: false))
            {
                dbc2.Database.UseTransaction(trans);
                // do some work
                // ...
                dbc2.SaveChanges();
            }
            trans.Commit();
        }
        catch
        {
            trans.Rollback();
        }
    }
}
I found out that I was simply abusing the syntax: starting the transaction through EF with db1.Database.BeginTransaction() lets SaveChanges enlist in it, instead of EF trying to start a second, parallel transaction on the same connection. So to help anyone who may stumble upon this question, this is the proper way to do it:
using (var db1 = new DbContext1())
{
    using (var trans = db1.Database.BeginTransaction())
    {
        try
        {
            //do the insertion into db1
            db1.SaveChanges();
            using (var db2 = new DbContext2())
            {
                //do the insertions into db2
                db2.SaveChanges();
            }
            trans.Commit();
        }
        catch (Exception)
        {
            trans.Rollback();
        }
    }
}
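(One caveat: the explicit transaction here only covers db1. db2.SaveChanges() commits in its own implicit transaction, so an exception from db2 rolls db1 back, but if trans.Commit() itself failed after db2 had already saved, db2's changes could not be undone.)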
I read a lot of articles on transactions, but I want to build my own example of nested transactions to see how they really work in C#. I already have a good idea about them in SQL, but C# is giving me a tough time, so I came here for an example that can explain how nested transactions work.
I tried the following code to check whether the inner transaction will be committed or not. Since the TransactionScopeOption is RequiresNew, the inner transaction should execute independently, but I introduced a deliberate unique key violation in the outer transaction and my inner transaction didn't execute. Why? Is my concept messed up?
Database _Cataloguedatabase = DatabaseFactory.CreateDatabase();

public void TransferAmountSuppress(Account a)
{
    var option = new TransactionOptions();
    option.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
    try
    {
        using (TransactionScope outerScope = new TransactionScope(TransactionScopeOption.RequiresNew, option))
        {
            using (DbCommand cmd = _Cataloguedatabase.GetStoredProcCommand(SpUpdateCredit))
            {
                _Cataloguedatabase.AddInParameter(cmd, CreditAmountParameter, DbType.String, a.Amount);
                _Cataloguedatabase.AddInParameter(cmd, CodeParameter, DbType.String, a.Code);
                _Cataloguedatabase.ExecuteNonQuery(cmd);
            }
            using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.RequiresNew, option))
            {
                using (DbCommand cmd = _Cataloguedatabase.GetStoredProcCommand(SpUpdateDebit))
                {
                    _Cataloguedatabase.AddInParameter(cmd, DebitAmountParameter, DbType.String, a.Amount);
                    _Cataloguedatabase.ExecuteNonQuery(cmd);
                }
                innerScope.Complete();
            }
            outerScope.Complete();
        }
    }
    catch (Exception ex)
    {
        throw new FaultException(new FaultReason(new FaultReasonText(ex.Message)));
    }
}
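For reference, RequiresNew gives the inner scope its own transaction that commits independently as soon as the inner scope completes. In the code above, if the deliberate violation throws in the outer ExecuteNonQuery, the inner scope is never entered at all, which would explain why the inner transaction never executes. Below is a minimal sketch that isolates the RequiresNew behaviour using a plain SqlConnection; the table and connection string are hypothetical placeholders.

using System;
using System.Data.SqlClient;
using System.Transactions;

class RequiresNewDemo
{
    static void Main()
    {
        try
        {
            using (var outer = new TransactionScope(TransactionScopeOption.RequiresNew))
            {
                // Run the inner scope first so it actually executes before the outer failure.
                using (var inner = new TransactionScope(TransactionScopeOption.RequiresNew))
                {
                    Execute("INSERT INTO dbo.Demo (Id) VALUES (1)");
                    inner.Complete(); // inner transaction commits here, independently of the outer scope
                }

                Execute("INSERT INTO dbo.Demo (Id) VALUES (1)"); // deliberate unique key violation
                outer.Complete(); // never reached
            }
        }
        catch (SqlException)
        {
            // The outer transaction rolls back; the inner insert of Id = 1 survives.
        }
    }

    static void Execute(string sql)
    {
        // Hypothetical connection string; dbo.Demo is assumed to have a unique constraint on Id.
        using (var cn = new SqlConnection(@"Server=.;Database=Test;Integrated Security=true"))
        using (var cmd = new SqlCommand(sql, cn))
        {
            cn.Open(); // enlists in the ambient transaction of the enclosing scope
            cmd.ExecuteNonQuery();
        }
    }
}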
I'm using nested transactions via the IDbConnection interface in C#. I have two methods that insert data into 2 different tables, but when it comes to the second insert, the first insert's transaction locks the second one, causing a timeout exception.
public void FirstInsert()
{
    using (var cn = new Connection().GetConnection())
    {
        cn.Open();
        using (var tran = cn.BeginTransaction())
        {
            try
            {
                //1st insert
                SecondInsert(); //calling second insert method
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
        }
    }
}

public void SecondInsert()
{
    using (var cn = new Connection().GetConnection())
    {
        cn.Open();
        using (var tran = cn.BeginTransaction())
        {
            try
            {
                //2nd insert, this one fails
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
        }
    }
}
When I check in SQL Server, the first insert has SPID 56 and the second insert is performed with SPID 57. When I run
exec sp_who2
the "BlkBy" column for SPID 57 says it is blocked by SPID 56.
How can I overcome this problem?
Use one connection for both operations. This likely involves passing the connection object around.
Usually, the connection+transaction-per-request pattern solves this issue well. Opening a connection in all kinds of methods is a code smell; it is a sign that the infrastructure fails to handle this for you.
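A rough sketch of that pattern (all names here are invented for illustration): one object owns the connection and transaction for the duration of the request, and every data method receives it instead of opening its own.

using System;
using System.Data;

// Hypothetical unit of work: one connection + one transaction per request.
public sealed class UnitOfWork : IDisposable
{
    public IDbConnection Connection { get; }
    public IDbTransaction Transaction { get; }

    public UnitOfWork(IDbConnection connection)
    {
        Connection = connection;
        Connection.Open();
        Transaction = Connection.BeginTransaction();
    }

    public void Commit() => Transaction.Commit();

    public void Dispose()
    {
        Transaction.Dispose(); // rolls back if Commit was never reached
        Connection.Dispose();
    }
}

// Usage: both inserts share one connection and one transaction.
// using (var uow = new UnitOfWork(new Connection().GetConnection()))
// {
//     FirstInsert(uow);
//     SecondInsert(uow);
//     uow.Commit();
// }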
You are doing it correctly, but there is no need for a separate connection object and transaction object in your second method, since the call to SecondInsert() is already inside the transaction scope. Your code can simply be:
public void FirstInsert()
{
    using (var cn = new Connection().GetConnection())
    {
        cn.Open();
        using (var tran = cn.BeginTransaction())
        {
            try
            {
                //1st insert
                SecondInsert(cn, tran); //second insert reuses the same connection and transaction
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
        }
    }
}

public void SecondInsert(IDbConnection cn, IDbTransaction tran)
{
    //perform second insert operation using cn and tran
}
I am inserting values into two tables using two stored procedures, and the data in both tables is linked, so I want the data entered via the 1st stored procedure to be rolled back if any error occurs in the second stored procedure.
I am using SQL Server 2008 as my back end and ASP.NET (C#) as the front end.
You need to use TransactionScope, as below:
using (var tran = new TransactionScope())
{
    //calling stored procedures here
    tran.Complete();
}
When an exception occurs, control leaves the using block without Complete() having been called, and thus the transaction will roll back.
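A minimal sketch with the two stored-procedure calls in place (the procedure names and connection string are placeholders):

using System.Data;
using System.Data.SqlClient;
using System.Transactions;

using (var tran = new TransactionScope())
{
    using (var cn = new SqlConnection("your-connection-string"))
    {
        cn.Open(); // opened inside the scope, so it enlists automatically

        using (var cmd1 = new SqlCommand("dbo.usp_InsertFirst", cn))
        {
            cmd1.CommandType = CommandType.StoredProcedure;
            // cmd1.Parameters.AddWithValue("@...", ...);
            cmd1.ExecuteNonQuery();
        }

        using (var cmd2 = new SqlCommand("dbo.usp_InsertSecond", cn))
        {
            cmd2.CommandType = CommandType.StoredProcedure;
            // cmd2.Parameters.AddWithValue("@...", ...);
            cmd2.ExecuteNonQuery();
        }
    }

    tran.Complete(); // skipped if either call throws, so both inserts roll back
}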
If you are using Entity Framework, you can use this:
using (var dataContext = new SchoolMSDbContext())
{
    using (var trans = dataContext.Database.BeginTransaction(IsolationLevel.ReadCommitted))
    {
        try
        {
            // your query
            trans.Commit();
        }
        catch (Exception ex)
        {
            trans.Rollback();
            Console.WriteLine(ex.InnerException);
        }
    }
}
Or you can try this:
using (var dataContext = new SchoolMSDbContext())
{
    using (var transaction = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    {
        try
        {
            //your query
            transaction.Complete();
        }
        catch (Exception ex)
        {
            transaction.Dispose();
            Console.WriteLine(ex.InnerException);
        }
    }
}
For this you'll have to add a reference to:
System.Transactions
For more information, check these links:
https://msdn.microsoft.com/en-us/data/dn456843.aspx
https://msdn.microsoft.com/en-us/library/2k2hy99x(v=vs.110).aspx
I have the method below:
public void UpdateQuantity()
{
    Sql ss = new Sql();
    M3 m3 = new M3();
    TransactionOptions ff = new TransactionOptions();
    ff.IsolationLevel = IsolationLevel.ReadUncommitted;
    using (TransactionScope dd = new TransactionScope(TransactionScopeOption.Required, ff))
    {
        try
        {
            ss.AddRegion("ALFKI", "SES1"); //step 1
            m3.UpdateAnotherSystem(); //step 2
            dd.Complete();
        }
        catch (Exception)
        {
        }
    }
}

public void AddRegion(string customerName, string description)
{
    using (NorthWind context = new NorthWind())
    {
        Region rr = new Region();
        rr.RegionID = 5;
        rr.RegionDescription = "Ssaman";
        context.Regions.Add(rr);
        try
        {
            context.SaveChanges();
        }
        catch (Exception)
        {
            throw;
        }
    }
}
In that, first I'm going to update a SQL Server database. After that I'm going to perform another update on another system. If step 2 fails (maybe a network failure), then I need to reverse step 1. Therefore I put the two method calls inside the TransactionScope. I'm using Entity Framework to work with SQL. Entity Framework always sets the transaction isolation level to read committed (according to SQL Profiler).
But my problem is that after context.SaveChanges() is called, my target table is locked until the transaction completes (dd.Complete()).
Is there any way to change the Entity Framework transaction isolation level? (My Entity Framework version is 5.)
SQL Server does not release locks that were taken due to writes until the end of the transaction. This is so that writes can be rolled back. You cannot do anything about this.
End your transaction or live with the fact that the rows written are still in use. Normally, this is not a problem. You should probably have a single context, connection and transaction for most work that happens in an HTTP request or WCF request. Transactions do not block on themselves.
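For example, the EF6 pattern of one context with one explicit transaction for the whole unit of work: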
using (var context = new BloggingContext())
{
    using (var dbContextTransaction = context.Database.BeginTransaction())
    {
        try
        {
            context.Database.ExecuteSqlCommand(
                @"UPDATE Blogs SET Rating = 5" +
                " WHERE Name LIKE '%Entity Framework%'"
                );
            var query = context.Posts.Where(p => p.Blog.Rating >= 5);
            foreach (var post in query)
            {
                post.Title += "[Cool Blog]";
            }
            context.SaveChanges();
            dbContextTransaction.Commit();
        }
        catch (Exception)
        {
            dbContextTransaction.Rollback();
        }
    }
}
UPDATE based on accepted answer:
bool success = false;
connection.Open();
//explicit isolation level is best-practice
using (var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted))
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, tran)) //using! and pass the transaction, or WriteToServer throws "unexpected existing transaction"
{
    bulkCopy.DestinationTableName = "table";
    bulkCopy.ColumnMappings...
    using (var dataReader = new ObjectDataReader<SomeObject>(paths))
    {
        bulkCopy.WriteToServer(dataReader);
        success = true;
    }
    tran.Commit(); //commit, will not be called if exception escapes
}
return success;
I use the SqlBulkCopy class for large inserts and it works fine.
After executing WriteToServer and saving the data to the database, I want to know whether all the data was saved successfully, so I can return true/false, because I need to save all or nothing.
var bulkCopy = new SqlBulkCopy(connection);
bulkCopy.DestinationTableName = "table";
bulkCopy.ColumnMappings...
using (var dataReader = new ObjectDataReader<SomeObject>(paths))
{
    try
    {
        bulkCopy.WriteToServer(dataReader);
    }
    catch (Exception ex) { ... }
}
If the call to WriteToServer completed without exceptions, all rows were saved and are on disk. This is just the standard semantics for SQL Server DML. Nothing special with bulk copy.
Like all other DML, SqlBulkCopy is all-or-nothing as well, except if you configure a batch size, which you did not.
connection.Open();
//explicit isolation level is best-practice
using (var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted))
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, tran)) //using! and pass the transaction
{
    bulkCopy.DestinationTableName = "table";
    bulkCopy.ColumnMappings...
    using (var dataReader = new ObjectDataReader<SomeObject>(paths))
    {
        //try
        //{
        bulkCopy.WriteToServer(dataReader); //options are configured on the SqlBulkCopy itself
        //}
        //catch(Exception ex){ ... } //you need no explicit try-catch here
    }
    tran.Commit(); //commit, will not be called if exception escapes
}
Above is your sample code, which I aligned with best practices.
There is no direct way of identifying whether the process completed successfully, other than to look for/catch any exceptions raised by the WriteToServer() method.
An alternative approach might be to check the number of records in the database before the process, and again after it completes; the difference being the number that were inserted. Comparing this value against the number of records to be inserted could give an idea of failure or success. However, this is not foolproof, particularly if there are other processes inserting/deleting records.
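A sketch of that counting approach (the table name and expected count are placeholders; as said, it is only indicative if other writers are active):

// Count rows before and after the bulk copy; the difference approximates the
// number of rows inserted (unreliable under concurrent inserts/deletes).
long CountRows(SqlConnection cn)
{
    using (var cmd = new SqlCommand("SELECT COUNT_BIG(*) FROM dbo.TargetTable", cn))
        return (long)cmd.ExecuteScalar();
}

long before = CountRows(connection);
bulkCopy.WriteToServer(dataReader);
bool allSaved = CountRows(connection) - before == expectedRowCount;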
However, these techniques in conjunction with TransactionScope - http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx - or something similar should achieve what you require.
EDIT
By default, each insert operation is processed as a batch; if the operation fails in a particular batch then that batch is rolled back, but not any batches inserted before it.
However, if an internal transaction is applied to the bulk operation, then a failure in any row can roll back the entire result set. For example:
using (SqlBulkCopy bulkCopy =
    new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity
        | SqlBulkCopyOptions.UseInternalTransaction))
{
    bulkCopy.BatchSize = 10;
    bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";
    try
    {
        bulkCopy.WriteToServer(reader);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    finally
    {
        bulkCopy.Close();
    }
}
An error in any of the above operations would cause the entire operation to roll back. See more details at http://msdn.microsoft.com/en-us/library/tchktcdk.aspx.
From the docs for this method, the following snippet suggests that you should catch any exceptions that are thrown; otherwise you can take it that the operation was successful.
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connectionString))
{
    bulkCopy.DestinationTableName = "dbo.BulkCopyDemoMatchingColumns";
    try
    {
        // Write from the source to the destination.
        bulkCopy.WriteToServer(reader);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    finally
    {
        // Close the SqlDataReader. The SqlBulkCopy
        // object is automatically closed at the end
        // of the using block.
        reader.Close();
    }
}
If you want to be super-sure, execute a query against the database to check the rows are there, after the bulk copy completes.