Using a try/catch structure, I'm trying to figure out what to do if an exception is caught at any point of the transaction. Below is a sample of the code:
try
{
    DbContext.Database.ExecuteSqlCommand("BEGIN TRANSACTION"); // Line 1
    DbContext.Database.ExecuteSqlCommand("Some Insertion/Deletion Goes Here"); // Line 2
    DbContext.Database.ExecuteSqlCommand("COMMIT"); // Line 3
}
catch (Exception)
{
}
If the exception was caught while executing line 1, nothing must be done besides alerting the error. If it was caught while executing the second line, I don't know whether I need to try to roll back the transaction that was successfully opened, and the same question applies if something goes wrong with the third line.
Should I just send a rollback anyway? Or send all the commands to the database in a single method call?
Inside the try/catch there's a loop performing many transactions like the one in the sample (I need lots of small transactions instead of one big one so that SQL Server can reuse the transaction log file properly and avoid growing it unnecessarily).
If any of the transactions goes wrong I'll just need to delete them all and report what happened, but I can't turn that into one big transaction and simply roll it back, otherwise the log file will grow to 40 GB.
I think this will help:
using (var ctx = new MyDbContext())
{
// begin a transaction in EF – note: this returns a DbContextTransaction object
// and will open the underlying database connection if necessary
using (var dbCtxTxn = ctx.Database.BeginTransaction())
{
try
{
// use DbContext as normal - query, update, call SaveChanges() etc. E.g.:
        ctx.Database.ExecuteSqlCommand(
            @"UPDATE MyEntity SET Processed = 'Done' "
            + "WHERE LastUpdated < '2013-03-05T16:43:00'");

        var myNewEntity = new MyEntity() { Text = @"My New Entity" };
ctx.MyEntities.Add(myNewEntity);
ctx.SaveChanges();
dbCtxTxn.Commit();
}
catch (Exception e)
{
dbCtxTxn.Rollback();
}
} // if DbContextTransaction opened the connection then it will close it here
}
taken from: https://entityframework.codeplex.com/wikipage?title=Improved%20Transaction%20Support
Basically the idea is that your transaction becomes part of the using block, and within that you have a try/catch around the actual SQL. If anything fails within the try/catch, the transaction is rolled back.
As of Entity Framework 6, ExecuteSqlCommand is wrapped in its own transaction by default, as explained here: http://msdn.microsoft.com/en-gb/data/dn456843.aspx
Unless you explicitly need to roll multiple SQL commands into a single transaction, there is no need to begin a new transaction scope explicitly.
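For example (a minimal sketch, assuming a hypothetical MyDbContext and MyEntity table), a one-off command under EF6 needs no explicit BEGIN/COMMIT at all:
using (var ctx = new MyDbContext())
{
    // EF6 wraps this single command in its own transaction by default,
    // so there is no explicit BEGIN TRANSACTION / COMMIT pair.
    ctx.Database.ExecuteSqlCommand(
        "DELETE FROM MyEntity WHERE Processed = 'Done'");
}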
With respect to transaction log growth, and assuming you are targeting SQL Server, setting the database's recovery model to SIMPLE will ensure the log gets recycled between checkpoints.
Obviously, if the transaction log history is not being maintained across the entire import, there is no implicit mechanism to roll back all the data in case of failure. Keeping it simple, I would probably just add a 'created' datetime field to the table and, if I needed to delete all rows in case of error, delete from the table based on a filter on that field.
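As a rough illustration of that cleanup idea (the table, column and context names below are hypothetical, not the poster's actual schema):
// Capture a timestamp before the import starts; every row inserted by this run
// gets Created set to the current time, so a failed run can be undone with a
// single DELETE even though each batch committed in its own small transaction.
var importStartedAt = DateTime.UtcNow;
try
{
    // ... loop of small transactions inserting into MyImportTable ...
}
catch (Exception)
{
    ctx.Database.ExecuteSqlCommand(
        "DELETE FROM MyImportTable WHERE Created >= @p0", importStartedAt);
    throw; // still report what happened
}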
Related
I need to know whether my transaction scope was successful or not, i.e. whether the records were saved in the database or not.
Note: I have this scope in the service layer, and I do not wish to include a try/catch block.
bool txExecuted = false;

using (var tx = new TransactionScope())
{
    //code
    // 1 SAVING RECORDS IN DB
    // 2 SAVING RECORDS IN DB
    tx.Complete();
    txExecuted = true;
}

if (txExecuted)
{
    // SAVED SUCCESSFULLY
}
else
{
    // NOT SAVED. FAILED
}
The commented code will be doing updates, and will probably be implemented using ExecuteNonQuery() - this returns an int of the number of rows affected. Keep track of all the return values to know how many rows were affected.
The transaction as a whole will either succeed or throw an exception when it completes. If no exception is encountered, the transaction was successful. If an exception occurs, some part of the transaction failed and none of it took place.
By considering these two facts (the records-affected count, and whether the transaction threw an exception or not) you can know whether the save worked and how many rows were affected.
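A rough sketch of combining those two signals (hypothetical connection string and table name; note that it still relies on catching the exception, as discussed below):
int rowsAffected = 0;
bool saved = false;
try
{
    using (var tx = new TransactionScope())
    using (var conn = new SqlConnection("connection string"))
    {
        conn.Open(); // the connection enlists in the ambient transaction

        using (var cmd = new SqlCommand("UPDATE MyTable SET Processed = 1 WHERE Processed = 0", conn))
        {
            rowsAffected += cmd.ExecuteNonQuery(); // rows touched by this statement
        }

        tx.Complete();
        saved = true;
    }
}
catch (TransactionAbortedException)
{
    // the transaction rolled back; saved stays false
}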
I didn't quite understand the purpose of txExecuted: if an exception occurs it will never be set to true, and execution will never even reach the if, so the only code that can actually run is the success branch. I don't see how you can decide not to use a try/catch and still hope to do anything useful with a system that is designed to throw an exception when something goes wrong; saying you don't want to catch exceptions isn't going to stop them from happening and affecting the control flow of your program.
To be clear, calling the Complete() method is only an indication that all operations within the scope are completed successfully.
However, you should also note that calling this method does not guarantee a commit of the transaction. It is merely a way of informing the transaction manager of your status. After calling this method, you can no longer access the ambient transaction via the Current property, and trying to do so results in an exception being thrown.
The actual work of commit between the resource managers happens at the End Using statement if the TransactionScope object created the transaction.
Since you are using ADO.NET, ExecuteNonQuery will return the number of rows affected. You can do a database lookup after the commit and outside of the using block.
In my opinion, it's a mistake not to have a try/catch. You want to catch the TransactionAbortedException and log it.
try
{
using (var scope = new TransactionScope())
{
using (var conn = new SqlConnection("connection string"))
{
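            // open the connection and run the commands here; they enlist in the ambient transaction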
}
// The Complete method commits the transaction. If an exception has been thrown,
// Complete is not called and the transaction is rolled back.
scope.Complete();
}
}
catch (TransactionAbortedException ex)
{
// log
}
Background
We are trying to archive old user data to keep our most common tables smaller.
Issue
Normal EF code for removing records works for our custom tables. The AspNetUsers table is a different story. It appears that the way to do it is using _userManager.Delete or _userManager.DeleteAsync. These work as long as I don't try to do multiple DB calls in one transaction. When I wrap this in a TransactionScope, it times out. Here is an example:
public bool DeleteByMultipleIds(List<string> idsToRemove)
{
try
{
using (var scope = new TransactionScope())
{
foreach (var id in idsToRemove)
{
var user = _userManager.FindById(id);
//copy user data to archive table
_userManager.Delete(user);//causes timeout
}
scope.Complete();
}
return true;
}
catch (TransactionAbortedException e)
{
Logger.Publish(e);
return false;
}
catch (Exception e)
{
Logger.Publish(e);
return false;
}
}
Note that while the code is running, if I run a query straight against the DB like:
DELETE
FROM ASPNETUSERS
WHERE Id = 'X'
it will also time out. This SQL works before the C# code is executed. Therefore, it appears that more than one DB hit locks the table. How can I find the user (DB hit #1) and delete the user (DB hit #2) in one transaction?
For me, the problem involved the use of multiple separate DbContexts within the same transaction. The BeginTransaction() approach did not work.
Internally, UserManager.Delete() is calling an async method in a RunSync() wrapper. Therefore, using the TransactionScopeAsyncFlowOption.Enabled parameter for my TransactionScope did work:
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
_myContext1.Delete(organisation);
_myContext2.Delete(orders);
_userManager.Delete(user);
scope.Complete();
}
Microsoft's advice is to use a different API when doing transactions with EF. This is due to the interactions between EF and the TransactionScope class. Implicitly, TransactionScope forces the isolation level up to Serializable, which causes a deadlock.
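If you do stay with TransactionScope, one thing worth trying (just a sketch; the default isolation level for TransactionScope is Serializable) is passing an explicit, lower isolation level:
var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // find and delete the users here
    scope.Complete();
}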
A good description of the EF internal API is here: MSDN Link
For reference, you may want to check whether the user manager exposes its data context; if it does, you can replace your TransactionScope with using (var dbContextTransaction = context.Database.BeginTransaction()) { /* code */ }.
Alternatively, looking at your scenario, you are actually quite safe just finding the user by ID, trying to delete it, and catching the error in case the user was deleted in the fraction of a second between finding it and deleting it.
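A sketch of that last approach, reusing the poster's method names but otherwise hypothetical:
var user = _userManager.FindById(id);
if (user != null)
{
    try
    {
        _userManager.Delete(user);
    }
    catch (Exception e)
    {
        // most likely the user was deleted by someone else between the find and the delete
        Logger.Publish(e);
    }
}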
I've been looking into transactions for two days now; perhaps I'm missing something obvious after taking in so much information. The goal here is to block simultaneous requests: if a condition is true, data is inserted, after which the condition becomes false. Two simultaneous requests will both check the condition before either inserts the data, and then both will try to insert it.
public async Task<ActionResult> Foo(Guid ID)
{
Debug.WriteLine("entering transaction scope");
using (var transaction = new TransactionScope(
TransactionScopeOption.Required,
new TransactionOptions() { IsolationLevel = IsolationLevel.Serializable },
TransactionScopeAsyncFlowOption.Enabled
))
{
Debug.WriteLine("entered transaction scope");
var context = new DbContext();
Debug.WriteLine("querying");
var foo = context.Foos.FirstOrDefault(/* condition */);
Debug.WriteLine("done querying");
context.Foos.Add(new Foo());
/* async work here */
Debug.WriteLine("saving");
context.SaveChanges();
Debug.WriteLine("saved");
Debug.WriteLine("exiting transaction scope");
transaction.Complete();
Debug.WriteLine("exited transaction scope");
return View();
}
}
This is the debug output when executing two requests at once with Fiddler:
entering transaction scope
entered transaction scope
querying
done querying
entering transaction scope
entered transaction scope
querying
done querying
saving
saving
saved
A first chance exception of type 'System.Data.SqlClient.SqlException' occurred in System.Data.dll
exiting transaction scope
exited transaction scope
This is my understanding of how the code is supposed to work:
A transaction with Serializable isolation level is required, to prevent phantom reads.
I cannot use DbContext.Database.Connection.BeginTransaction; when executing a query, the following error is thrown:
ExecuteReader requires the command to have a transaction when the connection assigned to the command is in a pending local transaction.
The Transaction property of the command has not been initialized.
TransactionScope normally causes problems when using task-based async.
http://entityframework.codeplex.com/discussions/429215
However, .NET 4.5.1 adds additional constructors to deal with this.
How to dispose TransactionScope in cancelable async/await?
EF 6 has improved transaction support, but I'm still on EF 5.
http://msdn.microsoft.com/en-us/data/dn456843.aspx
So obviously it's not working like I want it to. Is it possible to achieve my goal using TransactionScope? Or will I have to do an upgrade to EF 6?
Serializable isolation does not solve the old 'check then insert' problem. Two serializable transactions can concurrently evaluate the condition, conclude that they have to insert, then both attempt to insert, only for one to fail and one to succeed. This is because all reads are compatible with each other under serializable isolation (in fact they are compatible under all isolation levels).
There are many schools of thought on how to solve this problem. Some recommend using MERGE. Some recommend using a lock hint in the check query to acquire an X or U lock instead. Personally, I recommend always INSERTing and gracefully recovering from the duplicate key violation. Another approach that does work is using explicit app locks.
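A sketch of the 'always INSERT and recover' option, assuming SQL Server (error numbers 2627 and 2601 are the duplicate key / unique index violations) and a hypothetical Foos table keyed on Id:
try
{
    context.Database.ExecuteSqlCommand(
        "INSERT INTO Foos (Id) VALUES (@p0)", id);
}
catch (SqlException ex)
{
    // 2627 = unique constraint violation, 2601 = unique index violation
    if (ex.Number != 2627 && ex.Number != 2601)
        throw;

    // another request won the race and inserted the row first; nothing more to do
}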
EF or System.Transactions really only add noise to the question. This is fundamentally a back-end SQL problem. As for the problem of how to flow a transaction scope between threads, see Get TransactionScope to work with async / await (obviously you already know this from reading the OP; I didn't register it on first read). You will need this to get your async code to enlist in the proper context, but the blocking/locking is still a fundamental back-end problem.
I have a doubt about TransactionScope. I'd like to perform a transactional operation where I first do some CRUD operations (a transaction which inserts and updates some rows in the database) and get a result from the whole transaction (an XML document).
After I have the XML, I send it to a web service which my customer exposes to integrate with my system.
The point is, imagine that one day the WS my customer exposes goes down due to a weekly or monthly maintenance task performed by their IT area. Every time I run the whole process it performs the DB operations, but of course it throws an exception the moment I try to call the WS.
After searching on the internet I started thinking of TransactionScope. My data access method, which lives in my data access layer, already has a TransactionScope where I perform inserts, updates, deletes, etc.
The following code is what I'd like to try:
public void ProcessSomething()
{
using (TransactionScope mainScope = new TransactionScope())
{
FooDAL dl = new FooDAL();
string message = dl.ProcessTransaction();
WSClientFoo client = new WSClientFoo();
client.SendTransactionMessage(message);
mainScope.Complete();
}
}
public class FooDAL
{
public string ProcessTransaction()
{
        string transactionMessage = null;

        using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,
            new TransactionOptions() { IsolationLevel = IsolationLevel.ReadCommitted }))
        {
            // Do the inserts, updates and deletes here and, according to the
            // operation performed, build transactionMessage.
            scope.Complete();
        }

        return transactionMessage;
}
}
The question is: is it correct to use TransactionScope to handle what I want to do?
Thanks a lot for your time :)
TransactionScopeOption.Required in your FooDAL.ProcessTransaction method means in fact: if there is a transaction available, reuse it in this scope; otherwise, create a new one.
So in short: yes, this is the correct way of doing this.
But be advised that if you don't call scope.Complete() in FooDAL.ProcessTransaction, committing the outer mainScope will fail with a TransactionAbortedException (or something like that), which makes sense: if a nested scope decides that the transaction cannot be committed, the outer scope should not be able to commit it.
The following code throws a TransactionAbortedException with message "The transaction has aborted" and an inner TransactionPromotionException with message "Failure while attempting to promote transaction":
using ( TransactionScope transactionScope = new TransactionScope() )
{
try
{
using ( MyDataContext context = new MyDataContext() )
{
Guid accountID = new Guid( Request.QueryString[ "aid" ] );
Account account = ( from a in context.Accounts where a.UniqueID.Equals( accountID ) select a ).SingleOrDefault();
IQueryable < My_Data_Access_Layer.Login > loginList = from l in context.Logins where l.AccountID == account.AccountID select l;
foreach ( My_Data_Access_Layer.Login login in loginList )
{
MembershipUser membershipUser = Membership.GetUser( login.UniqueID );
}
[... lots of DeleteAllOnSubmit() calls]
context.SubmitChanges();
transactionScope.Complete();
}
}
catch ( Exception E )
{
[... reports the exception ...]
}
}
The error occurs at the call to Membership.GetUser().
My Connection String is:
<add name="MyConnectionString" connectionString="Data Source=localhost\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True"
providerName="System.Data.SqlClient" />
Everything I've read tells me that TransactionScope should just get magically applied to the Membership calls. The user exists (I'd expect a null return otherwise.)
The TransactionScope class masks exceptions. Most likely what's happening is that something inside that scope is failing (throwing an exception), and the TransactionAbortedException is simply a side-effect that occurs when control exits the using block.
Try wrapping everything inside the TransactionScope in a try-catch block, with a rethrow inside the catch, and set a breakpoint there; you should be able to see what the real error is.
One other thing, TransactionScope.Complete should be the last statement executed before the end of the using block containing the TransactionScope. In this case you should probably be alright, since you're not actually doing any work afterward, but putting the call to Complete inside an inner scope tends to make for more bug-prone code.
Update:
Now that we know what the inner exception is (failure promoting transaction), it's more clear what's going on.
The problem is that inside the TransactionScope, you are actually opening up another database connection with GetUser. The membership provider doesn't know how to re-use the DataContext you already have open; it has to open its own connection, and when the TransactionScope sees this, it tries to promote to a distributed transaction.
It's failing because you probably have MSDTC disabled on either the web server, the database server, or both.
There's no way to avoid the distributed transaction if you are going to be opening two separate connections, so there are really a few ways around this issue:
1. Move the GetUser calls outside the TransactionScope. That is, "read" the users first from the membership provider into a list, then start the transaction when you actually need to start making modifications.
2. Remove the GetUser calls altogether and read the user information directly from the database, on the same DataContext or at least the same connection.
3. Enable DTC on all servers participating in the transaction (performance will be impacted when a transaction promotes).
I think that option #1 is going to be the best in this scenario; it's very unlikely that the data you need to read from the membership provider will be changed between the time you read it and the time you begin the transaction.
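A sketch of option #1 (assuming the account and its logins have already been materialized into an in-memory list before this point):
// Read from the membership provider first, outside any transaction;
// loginList here is assumed to be an in-memory list (e.g. the earlier query with ToList()).
var membershipUsers = loginList
    .Select(login => Membership.GetUser(login.UniqueID))
    .ToList();

// ...and only then open the TransactionScope for the actual modifications,
// so everything runs on the single DataContext connection.
using (TransactionScope transactionScope = new TransactionScope())
using (MyDataContext context = new MyDataContext())
{
    // DeleteAllOnSubmit() calls go here
    context.SubmitChanges();
    transactionScope.Complete();
}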
At one level, it is correct; the transaction is always aborted (you aren't calling Complete()). Is that the exact code?
Additionally, having the DataContext outside the TransactionScope makes me suspect that it might be doing some odd things, since the transaction isn't there when the data context first gets created. Have you tried both of the following?
1. reversing the creation order, so the TransactionScope spans the DataContext
2. calling Complete
using ( TransactionScope transactionScope = new TransactionScope() )
using ( MyDataContext context = new MyDataContext() )
{
/* ... */
transactionScope.Complete();
}