Is there a way to know in a wcf operation that a transaction has committed?
OK, second attempt at being more specific.
I have a WCF service with an operation that has transaction flow set to Allowed.
Now, when a client calls my WCF service, the call can flow a transaction. But my service is also interested in whether the client's transaction actually succeeded: at the service level there is additional work to do once everything went well, but only if all transactions have been committed.
Is there an event I can subscribe to, or something similar?
It depends on the service itself and how you are handling transactions. If you are engaging in transactions in WCF through WS-Transaction, then as long as the call succeeds without an exception, you can assume the transaction took place.
However, if this is in the context of another transaction, then you can't be sure if the transaction went through until the containing transaction is completed.
Even if you are using the TransactionScope class, if you have the service enabled to use transactions, you still have to take into account the encompassing transaction (if there is one).
You will have to provide more information about where the transaction is in relation to the call in order for a more complete answer.
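That said, regarding the "is there an event" part: System.Transactions exposes a TransactionCompleted event on the ambient transaction that fires once the outcome (committed or aborted) is known. A minimal sketch inside a transactional operation (the operation name and the post-commit work are illustrative):

[OperationBehavior(TransactionScopeRequired = true)]
public void DoWork() // hypothetical operation
{
    if (Transaction.Current != null)
    {
        Transaction.Current.TransactionCompleted += (sender, e) =>
        {
            // Fires once the distributed outcome has been decided.
            if (e.Transaction.TransactionInformation.Status == TransactionStatus.Committed)
            {
                // Kick off the post-commit work here.
            }
        };
    }
    // ... normal transactional work ...
}

Note that the handler runs only after two-phase commit finishes, so it may fire on a different thread than the operation itself.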
Try using the following operation behavior attribute on the operation that allows transaction flow:
[OperationBehavior(TransactionScopeRequired=true)]
If a transaction is flowed from the client, the service will use it:
bool isTransactionComplete = true;
try
{
    using (TransactionScope trScope = new TransactionScope(TransactionScopeOption.Required))
    {
        // some work
        trScope.Complete();
    }
}
catch (TransactionAbortedException)
{
    // The transaction holder got an exception from some service
    // and canceled the transaction.
    isTransactionComplete = false;
}
catch // any other exception
{
    isTransactionComplete = false;
    throw;
}
if (isTransactionComplete)
{
    // Success
}
As casperOne wrote, it depends on the settings. But you should also be aware of complex transaction setups, such as:
1) a sessionful service with simultaneous transactions for one service instance
2) a transaction inside another transaction (see the sketch below)
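For illustration, a minimal sketch of case 2, assuming the inner work must commit independently of the outer (possibly flowed) transaction:

using (var outer = new TransactionScope())
{
    // Work enlisted in the outer transaction.

    // RequiresNew suspends the ambient transaction and starts an
    // independent one; its outcome does not decide the outer outcome.
    using (var inner = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        // Independent inner work.
        inner.Complete();
    }

    outer.Complete();
}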
The following code is part of my business layer:
public void IncrementHits(int ID)
{
using (var context = new MyEntities())
{
using (TransactionScope transaction = new TransactionScope())
{
Models.User userItem = context.User.First(x => x.IDUser == ID);
userItem.Hits++;
try
{
context.SaveChanges();
transaction.Complete();
}
catch (Exception)
{
transaction.Dispose();
throw;
}
}
}
}
Sometimes (once or twice a week) I get a TransactionInDoubtException. Stacktrace:
at System.Transactions.TransactionStateInDoubt.EndCommit(InternalTransaction tx)
at System.Transactions.CommittableTransaction.Commit()
at System.Transactions.TransactionScope.InternalDispose()
at System.Transactions.TransactionScope.Dispose()
As far as I know, the default isolation level is Serializable, so there should be no problem with this atomic operation (assuming no timeout occurs because of a write lock).
How can I fix my problem?
Use transaction.Rollback instead of transaction.Dispose.
If you have a transaction in a pending state, always roll it back on an exception.
MSDN says: "if the transaction manager loses contact with the subordinate participant after sending the Single-Phase Commit request but before receiving an outcome notification, it has no reliable mechanism for recovering the actual outcome of the transaction. Consequently, the transaction manager sends an In Doubt outcome to any applications or voters awaiting informational outcome notification."
Please look into these links:
https://msdn.microsoft.com/en-us/library/cc229955.aspx
https://msdn.microsoft.com/en-us/library/system.transactions.ienlistmentnotification.indoubt(v=vs.85).aspx
Add a finally block to dispose of the transaction, and set a status flag to true once the transaction commits. Check that flag in the finally block: if it is true, don't roll back; otherwise roll back and dispose of the transaction.
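A minimal sketch of that pattern, using CommittableTransaction (which, unlike TransactionScope, exposes explicit Commit and Rollback); the connection and its work are illustrative:

bool committed = false;
using (var tx = new CommittableTransaction())
{
    try
    {
        using (var conn = new SqlConnection(connectionString)) // illustrative
        {
            conn.Open();
            conn.EnlistTransaction(tx); // enlist explicitly
            // ... execute commands ...
        }
        tx.Commit();
        committed = true;
    }
    finally
    {
        if (!committed)
        {
            tx.Rollback(); // pending transaction: always roll back
        }
    }
}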
Would it behave properly if you moved your try/catch to include the using directive for the TransactionScope? The using directive implicitly calls Dispose for you (so doing it explicitly seems redundant), and you can then rethrow the exception. This is what I see implemented in the examples on MSDN:
https://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.dispose(v=vs.110).aspx
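A minimal sketch of that arrangement, where the implicit commit inside Dispose is covered by the catch blocks (the work inside the scope is illustrative):

try
{
    using (var scope = new TransactionScope())
    {
        // ... SaveChanges, etc. ...
        scope.Complete();
    } // Dispose() commits here; failures surface as the exceptions below.
}
catch (TransactionInDoubtException)
{
    // Outcome unknown: log it and reconcile the data later.
}
catch (TransactionAbortedException)
{
    // Transaction was rolled back.
}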
I have got a service that should use distributed transactions.
[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public bool ServiceMethod(int parameterPlaceHolder)
{
return SomeOperationResult();
}
For reasons out of my responsibility, this service should never throw faults. On success it returns one value, on failure another (abstracted to a bool here for demo purposes).
The transaction flowing works.
However, the attribute implies that any result that is not an uncaught exception will complete the transaction. That's not the behavior I want. I want to control the outcome of the transaction myself. On returning false, I want to have the transaction fail.
I have tried various methods:
The obvious one: setting TransactionAutoComplete to false. This means that I have to use a session based service. I don't want to. I don't need to. I'm perfectly fine with a single transaction scope per call. But it's not allowed. ("TransactionAutoComplete set to false requires the use of InstanceContextMode.PerSession.")
The DIY one: setting TransactionScopeRequired to false and using my own. This means the flowed transactions no longer work and I create a new local transaction every time.
The desperate one: Trying to get hold of the transaction that WCF creates and rolling it back on my own... this leads to my service throwing exceptions because it tries to AutoComplete a transaction that is long gone.
I'm out of ideas. Does anyone know how to create my own transaction scope that uses a flowed distributed transaction, without the Microsoft auto-complete-on-normal-return pattern? I would like to be able to abort the transaction without throwing an exception.
Transaction scopes can nest. The entire transaction aborts if any scope is not completed. So:

using (new TransactionScope()) { } // doom the transaction

You'd better put a comment on that line, though, because at first glance it looks like a bug.
You can also try to call stuff on Transaction.Current but I have no experience with that.
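Putting that together for the service above (a sketch, assuming that dooming the ambient transaction is acceptable; the flowed transaction then aborts even though WCF auto-completes its own scope and no fault is sent):

[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public bool ServiceMethod(int parameterPlaceHolder)
{
    bool success = SomeOperationResult();
    if (!success)
    {
        // A scope that is disposed without Complete() marks the whole
        // ambient transaction abort-only; no exception leaves the operation.
        using (new TransactionScope()) { }
    }
    return success;
}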
I have a doubt about TransactionScope. I'd like to build a transactional operation in which I first perform some CRUD operations (a transaction that inserts and updates some rows in the database) and get a result from the whole transaction (an XML message).
Once I have the XML, I send it to a web service that my customer exposes to integrate with my system.
The point is: imagine that one day the web service my customer exposes goes down because of a weekly or monthly maintenance task its IT area performs. Then, every time I run the whole operation, the database work succeeds, but of course an exception is thrown the moment I try to call the web service, and in that case I want the database changes rolled back too.
After searching on the Internet, I started to think of TransactionScope. The data access method in my data access layer already has a TransactionScope in which I perform inserts, updates, deletes, etc.
The following code is what I'd like to try:
public void ProcessSomething()
{
using (TransactionScope mainScope = new TransactionScope())
{
FooDAL dl = new FooDAL();
string message = dl.ProcessTransaction();
WSClientFoo client = new WSClientFoo();
client.SendTransactionMessage(message);
mainScope.Complete();
}
}
public class FooDAL
{
public string ProcessTransaction()
{
    string transactionMessage = null;
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
        // Do insert, update, delete and, according to the operation,
        // generate transactionMessage.
        scope.Complete();
    }
    return transactionMessage;
}
}
The question is, is it correct to use TransactionScope to handle what I want to do ?
Thanks a lot for your time :)
TransactionScopeOption.Required in your FooDAL.ProcessTransaction method means in fact: if there is a transaction available, reuse it in this scope; otherwise, create a new one.
So in short: yes, this is the correct way of doing this.
But be advised that if you don't call scope.Complete() in FooDAL.ProcessTransaction, the transaction is doomed: mainScope.Complete() merely records the vote, and disposing mainScope will then throw a TransactionAbortedException (or something like that), which makes sense: if a nested scope decides that the transaction cannot be committed, the outer scope should not be able to commit it.
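To illustrate the failure mode (a minimal sketch, bodies elided):

using (var mainScope = new TransactionScope())
{
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // Work... but inner.Complete() is never called.
    } // Disposing an uncompleted inner scope dooms the transaction.

    mainScope.Complete(); // Just records the vote.
} // Dispose() here throws a TransactionAbortedException.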
I was going through a piece of code and came across the following:
List<Task> tasks = new List<Task>();
using (var transactionScope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
{
    try
    {
        // Perform some database reads (these happen inside a transaction scope
        // with scope option "Required" and isolation level "ReadCommitted").
        // Filter the data.
        // At this point the code already has a reference to a WCF duplex callback.
        // Create one task per item:
        foreach (var data in someDataList) // someDataList: the filtered List<SomeData>
        {
            var task = Task.Factory.StartNew(() =>
            {
                duplexCallback.Process(data); // the WCF duplex callback instance
            });
            tasks.Add(task);
        }
    }
}
catch(Exception ex)
{
// Log exception details
}
transactionScope.Complete();
}
try
{
Task.WaitAll(tasks.ToArray());
}
catch(AggregateException ae)
{
ae.Handle( ex => {
// log exception details
return true;
});
}
Questions:
The parent transaction isolation level is "Snapshot" while the inner database reads are using "ReadCommitted". What will be the actual transaction isolation level?
Let's say there are two tasks. Task 1 processes just fine and sends its data to the WCF client on the callback channel, but task 2 raises an exception. I guess at this point all the activities performed within the parent transaction scope should roll back, but I'm not sure what it means to roll back a set of data that has already been sent over the WCF callback channel and reached the client.
1) It depends. If you mean nested TransactionScopes, then according to MSDN you cannot nest them with different isolation levels:
"When using nested TransactionScope objects, all nested scopes must be configured to use exactly the same isolation level if they want to join the ambient transaction. If a nested TransactionScope object tries to join the ambient transaction yet it specifies a different isolation level, an ArgumentException is thrown."
However, if you are using stored procedures, functions, or just running raw SQL, you may explicitly change the isolation level, and it remains set for that connection until it is explicitly changed again. But please note it will not be propagated back to the TransactionScope object.
2) It means that all changes made via a resource manager will be rolled back. Of course, if you just query a database and transfer the results back over a channel, there is nothing to roll back; but if you update a database, for example, those changes will be rolled back in this case.
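For illustration, a minimal sketch of the nesting rule quoted above; constructing the inner scope throws:

using (new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
{
    // Joining the ambient transaction with a different isolation
    // level throws an ArgumentException at construction time.
    using (new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted }))
    {
    }
}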
Hope it helps!
I think I might have my Unit of Work set up wrong in my architecture. Here is what I currently have (indented to show order):
HttpRequest.Begin()
    UnitOfWork.Begin()
        Session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
Here, I call various services to perform crud using NHibernate. When I want to make a change to the database (update/save), I call this code:
using (var transaction = unitOfWork.Session.BeginTransaction())
{
    try
    {
        // Key is just a generic type parameter
        ret = (Key)unitOfWork.Session.Save(entity);
        transaction.Commit();
        unitOfWork.Session.Clear();
    }
    catch
    {
        transaction.Rollback();
        unitOfWork.Session.Clear();
        unitOfWork.DiscardSession();
        throw;
    }
}
When the HttpRequest is over, I perform these steps:
UnitOfWork.Commit()
Transaction.Commit() // This is my session's transaction from the Begin above
I am running into issues with being able to roll back large batch processes. Because I am committing my transactions in my CRUD layer, as seen above, my transaction is no longer active, and when I try to roll back in my UnitOfWork, it does nothing because the transaction has already been committed. The reason I'm committing in my CRUD layer is so I can persist my data as quickly as possible without locking the database for too long.
What is the best course of action to take in a situation like the one above? Do I just make special CRUD operations that don't commit for batch jobs and handle the commit at the end of the job, or is my logic just flawed with my UnitOfWork and session-per-request? Any suggestions?
You've discovered the reason why the session-per-request pattern is so popular and the problems that can stem from micro-managing your unit of work.
Typically with each web request, everything that needs to be done within that request can be thought of as one unit of work so it stands to reason that you should only have one unit of work and one NHibernate session open during that single web request.
Also, I think you may be a bit confused about how NHibernate works due to this sentence in your question: "The reason I'm committing my code in my CRUD layer is so I can persist my data as quickly as possible without locking the database for too long."
NHibernate is not going to be causing any locking in your database. Every time you call ISession.Save(entity), as long as you do not call ISession.Flush() or ITransaction.Commit(), nothing is written to the database; rather, it is added to a queue of items to be inserted or updated when the current transaction is committed at the end of the web request.
So your session per request should be setup like so:
void Application_BeginRequest()
{
// Start your unit of work, open a session and begin a transaction
}
// Do all of your work ( Read, insert, update, delete )
void Application_EndRequest()
{
    try
    {
        UnitOfWork.Current.Transaction.Commit();
    }
    catch (Exception)
    {
        UnitOfWork.Current.Transaction.Rollback();
    }
}
Of course there are many ways to do this same thing, but these are the basics of the session-per-request pattern: only one session for the entire web request.
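For concreteness, a minimal sketch of a unit of work following this pattern (the UnitOfWork shape here is an assumption, not a prescribed implementation):

public class UnitOfWork : IDisposable
{
    private readonly ISession session;        // NHibernate session
    private readonly ITransaction transaction;

    public UnitOfWork(ISessionFactory factory)
    {
        // One session and one transaction for the whole request.
        session = factory.OpenSession();
        transaction = session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
    }

    public ISession Session { get { return session; } }

    public void Commit()
    {
        // All queued inserts/updates are flushed and committed here,
        // at the end of the request, not in the CRUD layer.
        transaction.Commit();
    }

    public void Rollback()
    {
        transaction.Rollback();
    }

    public void Dispose()
    {
        transaction.Dispose();
        session.Dispose();
    }
}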