How do transactions work in NHibernate? - C#

I just started learning NHibernate and I'm confused by transactions. I know that NHibernate tracks all changes to persistent objects in a session and that those changes get sent to the database on commit, but what is the purpose of transactions?
If I wrap code in a 'using transaction' block and call commit, does it commit only the object changes that occurred within the transaction, or does it commit all changes that occurred within the session since the last commit or flush?

The purpose of transactions is to make sure that you don't commit a session with dirty data or errors in it. Consider the very simple case of a transaction that places an order for a book.
You will probably do the following actions:
a) Check if the book exists at this moment.
b) Read the customer details and see if he has anything in the shopping cart.
c) Update the book count
d) Make an entry for the order
Now consider the case where you run into an error while the order is being entered. Obviously you want your other changes to be rolled back as well, and that is when you roll back the transaction.
How do you do it? Well, there are many ways. One way for web apps is to monitor the HTTP error object, as follows:
if (HttpContext.Current != null && HttpContext.Current.Error != null)
    transaction.Rollback();
Ideally you should not break your unit-of-work pattern by using explicit transaction blocks. Try to avoid doing this as much as possible.

If you don't use transactions, then any time NHibernate sends a batch, that batch alone will be a transaction. I'm not sure whether session.Flush() uses a batch or not; let's suppose it does. Your first call to session.Flush() would result in a transaction. Suppose your second call to flush results in an error. The changes from the first flush would remain in the DB.
If, on the other hand, you're using an explicit transaction, you can call flush a million times, but if you roll back the transaction (maybe because the million-and-first flush threw an error) then all of those flushes are rolled back.
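For illustration, a minimal sketch of that explicit-transaction pattern using NHibernate's ISession/ITransaction API (sessionFactory, bookId, customerId, and the Book/Order entities and their properties are placeholders for the book-order example above):
using (var session = sessionFactory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    try
    {
        // Hypothetical entities standing in for steps a) through d).
        var book = session.Get<Book>(bookId);
        book.StockCount -= 1;
        session.Save(new Order { Book = book, CustomerId = customerId });

        session.Flush();        // changes are sent to the database...
        transaction.Commit();   // ...but only become permanent here
    }
    catch
    {
        transaction.Rollback(); // undoes every flush issued inside this transaction
        throw;
    }
}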
Hope that makes sense.

Related

Distributed transaction with 2 phase commit - atomicity issue due to commit phase not being synchronous

I have a WCF service which includes an operation marked with OperationBehavior(TransactionScopeRequired=true) and TransactionFlow(TransactionFlowOption.Allowed).
The client of this service is also part of the transaction (it has a database as well. This is a simplified example), so this involves distributed transactions and 2-phase-commit.
In order for my database operations to support 2-phase-commit, I've implemented the IEnlistmentNotification interface.
In the prepare phase I write the data to the DB with a transaction tag on it, and in the commit phase I remove the transaction tag from the data. Note that the commit phase includes database access, so it may be a bit slow.
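For context, the enlistment described here has roughly the following shape (a sketch only; WriteDataWithTag, RemoveTag, and DeleteTaggedData are hypothetical stand-ins for the actual database code):
public class TaggedDbEnlistment : IEnlistmentNotification
{
    private readonly Guid _transactionTag = Guid.NewGuid();

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        WriteDataWithTag(_transactionTag);     // write the rows marked with the tag
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        RemoveTag(_transactionTag);            // DB access here, so this phase is slow
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        DeleteTaggedData(_transactionTag);     // discard the tentatively written rows
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }

    // Hypothetical placeholders for the real data-access code.
    private void WriteDataWithTag(Guid tag) { /* ... */ }
    private void RemoveTag(Guid tag) { /* ... */ }
    private void DeleteTaggedData(Guid tag) { /* ... */ }
}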
The problem is that from what it seems and from what I've read, the Commit phase is run asynchronously, so for example, the following sequential scenario may not work:
1) Transaction 1: Client inserts A
2) Transaction 2: Client inserts B which relies on A (server looks up A, extracts information from it and uses it to insert B)
Since the commit phase of transaction 1 may not have yet finished on the server side, transaction 2 may not find A (since it's still marked with 'transaction 1' tag).
These two transactions may occur in quick succession, so it's a race condition.
The way I noticed it: when I enabled logging in my DB driver, the commit became a bit slower and an error occurred in the 2nd transaction. If I disabled the logging, it succeeded. But even with logging disabled it's still a race condition, and I wouldn't rely on it in a production environment.
What would be the best way to tackle this issue?
I already faced that problem, and we managed it with an orchestrated saga. All you need to implement it is a message queue (Apache Kafka, RabbitMQ, ...). Each client then sends a simple notification of the executed event, like {"event": "projectAdded"...}.
You'll also need a coordinator that subscribes to that event and sends a new event, like {"event": "sendNotification"...}, to the next client, which listens for that event to start working.
For consistency, you can also send events like {"error": "projectAdditionFailed"...} in order to roll back and compensate for the already-executed steps.
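A rough sketch of what such a coordinator might look like (IMessageBus and the event names here are illustrative, not any particular broker's API):
public interface IMessageBus
{
    void Publish(string topic, string payloadJson);
    void Subscribe(string topic, Action<string> handler);
}

public class SagaCoordinator
{
    public SagaCoordinator(IMessageBus bus)
    {
        // A step completed successfully: tell the next participant to start.
        bus.Subscribe("projectAdded",
            payload => bus.Publish("sendNotification", payload));

        // A step failed: publish a compensating event so earlier work is undone.
        bus.Subscribe("projectAdditionFailed",
            payload => bus.Publish("rollbackProjectAddition", payload));
    }
}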

SQL Server errors in trigger that locks table lost with SqlDataAdapter and ambient transaction

Okay, so I've run into a rather bizarre circumstance. There are several layers to my situation. I haven't identified whether every layer is strictly required, but here's what's going on:
C# code is creating an ambient transaction, into which a SqlConnection is automatically enlisting.
C# code is using a SqlDataAdapter to insert a row into a table.
The InsertCommand is referencing a stored procedure. The stored procedure is a straightforward INSERT statement.
The table into which the INSERT is being done has an INSTEAD OF INSERT trigger on it.
The trigger obtains an exclusive lock on the table.
An error occurs within the trigger.
With this combination, the error is not raised in the C# code. However, if the trigger does not obtain an exclusive lock on the table, the error does make it up to the C# code.
The error is actually happening, though, as evidenced by the fact that on the SQL Server side the transaction has been aborted. The C# code doesn't know that the transaction has been aborted, and only encounters an error when the disposal of the TransactionScope tries to COMMIT TRANSACTION.
I have created a minimal reproduction of this scenario:
https://github.com/logiclrd/TestErrorWhileLockedInTrigger
Does anyone have any understanding of why this might be, and how proper error handling behaviour might be restored?
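For reference, the setup described above looks roughly like this minimal sketch (the stored procedure dbo.InsertRow, the Value column, and connectionString are placeholders, not the names from the actual repro):
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();   // auto-enlists in the ambient transaction

    var adapter = new SqlDataAdapter();
    adapter.InsertCommand = new SqlCommand("dbo.InsertRow", connection)
    {
        CommandType = CommandType.StoredProcedure
    };
    adapter.InsertCommand.Parameters.Add("@Value", SqlDbType.NVarChar, 100, "Value");

    var table = new DataTable();
    table.Columns.Add("Value", typeof(string));
    table.Rows.Add("example");

    adapter.Update(table);   // the trigger's error is swallowed here when the lock is taken

    scope.Complete();
}   // the failure only surfaces here, when disposing the scope tries to COMMIT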
So, I've done some more testing of this.
My first thought was, if holding the exclusive lock is causing it to squelch the error, maybe explicitly releasing the lock will unsquelch it? So, I put a TRY/CATCH around the statement that generates the error in my proof-of-concept, had it ROLLBACK TRANSACTION and then re-THROW, but it didn't do anything.
So then my next thought was, the RAISERROR statement, when used with severity levels 20-25, forcibly terminates the connection. I'm not sure if this is an ideal solution, because it also writes an entry to the SQL Server event log when this happens. However, it does achieve the goal of having the SqlDataAdapter see the error during its Update command instead of the C# code thinking the transaction is still active and trying to commit it.
Does anyone know of other potential downsides to this "sledgehammer" approach, or is it possibly going to be the only way to get the error to propagate properly in this circumstance?
I have identified the cause of the problem.
The statement in the trigger locking the table looked like this:
SELECT TOP 0 *
FROM TableToTriggerAndLock WITH (TABLOCKX, HOLDLOCK)
While this returns no data, it does return an (empty) result set. It turns out the SqlDataAdapter class only cares about the first result set it gets back on the TDS stream, so the error coming back in the second result set is completely passed over.
Take out the locking statement, and you take out that redundant result set, and now the error is in the first result set.
The solution, then, is to suppress the result set, which I did by reworking the locking statement as:
DECLARE @Dummy INT
SELECT TOP 0 @Dummy = 1
FROM TableToTriggerAndLock WITH (TABLOCKX, HOLDLOCK)
Hope this helps someone out there working with SqlDataAdapter and more complicated underlying operations. :-)

Bulletproof approach to tackle SQL transaction timeouts

Recently we faced quite an interesting issue that has to do with SQL transaction timeouts. The statement that timed out does not really matter for the sake of the question, but it was a single INSERT statement without an explicit transaction, with a client-generated GUID as the key:
INSERT MyTable
(id, ...)
VALUES (<client-app-generated-guid>, ...)
We also have retry policies in place, so that if a command fails with a SqlException it is retried. One day SQL Server (Azure SQL) did not behave normally and we faced a lot of strange PK violation errors during retries. They were caused by retrying transactions that had actually committed successfully on the SQL Server side (so the retry attempted an insert with an already-taken ID). I understand that a SQL timeout is a purely client-side concept, so if the client thinks that a SqlCommand failed, it may or may not actually have failed.
I suspect that explicit client-side transaction control, for instance wrapping the statements in a TransactionScope as shown below, will fix 99% of such troubles, because Commit is actually quite a fast and cheap operation. However, I still see a caveat: the timeout can also happen during the commit stage. The application can again end up in a situation where it's impossible to tell whether the transaction really committed or not (to decide whether a retry is needed).
The question is how to write code in a bulletproof (against this kind of trouble) and generic fashion, and retry only when it is positively clear that the transaction was not committed.
using (var trx = new TransactionScope())
using (var con = GetOpenConnection(connectionString))
{
    con.Execute("<some-non-idempotent-query>");

    // what if Complete() times out?!
    // to retry or not to retry?!
    trx.Complete();
}
The problem is that the exception does not mean that the transaction failed. For any compensating action (like retrying) you need a definite way of telling whether it failed. There are scalability issues with what I will suggest, but it's the technique that matters; the scalability issues can be solved in other ways.
My solution:
The last INSERT before the COMMIT writes a GUID to a tracking table.
If an exception occurs that indicates a network failure, SELECT @@TRANCOUNT. If it indicates you are still in a transaction (it is greater than 0, which probably should never happen, but it's worth checking), then you can happily resubmit your COMMIT.
If @@TRANCOUNT returns 0, then you are no longer in a transaction. Selecting your GUID from the tracking table will tell you whether your COMMIT was successful.
If your commit was not successful (@@TRANCOUNT = 0 and your GUID is not present in the tracking table), then resubmit your entire batch from the BEGIN TRANSACTION onwards.
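A minimal C# sketch of that check, assuming the tracking table is called dbo.CommitTracking with a BatchId column (both names are hypothetical) and that the batch inserts this GUID just before its COMMIT:
static bool TransactionCommitted(string connectionString, Guid batchId)
{
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.CommitTracking WHERE BatchId = @id", con))
        {
            cmd.Parameters.AddWithValue("@id", batchId);

            // GUID present => the COMMIT went through, do not retry.
            // GUID absent  => the work was rolled back, safe to retry.
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}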
The general approach is: try to read back what you just tried to insert.
If you can read back the ID that you tried to insert, then the previous transaction committed successfully and there is no need to retry.
If you can't find the ID that you tried to insert, then you know that your attempt to insert has failed, so you should retry.
I'm afraid there is no way to have a completely generic pattern that would work for any SQL statement. Your "checking" code needs to know what to look for.
If it is an INSERT with an ID, then you are looking for that ID.
If it is some UPDATE, then the check would be custom and depend on the nature of that UPDATE.
If it is DELETE, then the check consists of trying to read what was meant to be deleted.
Actually, here is a generic pattern: any data modification batch that has one or more INSERT, UPDATE, or DELETE statements should include one more INSERT within that transaction that writes some GUID (an ID of the data-modifying transaction itself) into a dedicated audit table. Your checking code then tries to read that same GUID from that dedicated audit table. If the GUID is found, you know that the previous transaction committed successfully. If the GUID is not found, you know that the previous transaction was rolled back and you can retry.
Having this dedicated audit table unifies and standardizes the checks. The checks no longer depend on the internals and details of your data-changing code. Your data modification code and your verification code depend on the same agreed interface: the audit table.
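A sketch of the writing side of that pattern, reusing the TransactionScope example from the question and assuming a hypothetical dbo.TransactionAudit table (and the same Execute extension, e.g. Dapper, used above):
var batchId = Guid.NewGuid();

using (var trx = new TransactionScope())
using (var con = GetOpenConnection(connectionString))
{
    con.Execute("<some-non-idempotent-query>");                        // the real work

    // The marker row: present if and only if the whole transaction commits.
    con.Execute("INSERT dbo.TransactionAudit (BatchId) VALUES (@id)",
                new { id = batchId });

    trx.Complete();
}

// If Complete() times out, look up batchId in dbo.TransactionAudit:
// found => the transaction committed, do not retry; not found => safe to retry.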

Isolation level in Sql Transaction

I have implemented SqlTransaction in C# to begin, commit, and roll back transactions. Everything works, but I have a problem accessing the tables involved in the transaction while it is running.
I was not able to read the tables that are part of the transaction while it was in progress. While searching about this, I found that it happens due to an exclusive lock: any subsequent selects on that data have to wait for the exclusive lock to be released. I have gone through every isolation level provided by SqlTransaction, but none of them worked.
So, I need to release the exclusive lock during the transaction so that other users can access the table and read the data.
Is there any method to achieve this?
Thanks in advance.
Here's my C# code for the transaction:
SqlTransaction transaction = null;
try
{
    using (SqlConnection connection = new SqlConnection(Connection.ConnectionString))
    {
        connection.Open();
        transaction = connection.BeginTransaction(IsolationLevel.Snapshot, "FaresheetTransaction");

        // Here all the transactional work occurs.

        if (transaction.Connection != null)
        {
            transaction.Commit();
            transaction.Dispose();
        }
    }
}
catch (Exception ex)
{
    // The transaction is declared outside the try block so it is still in scope here.
    if (transaction != null && transaction.Connection != null)
        transaction.Rollback();
    if (transaction != null)
        transaction.Dispose();
}
This code works fine, but the problem arises when other parts of the application access the data in the tables involved in the transaction while the transaction is still in progress: when they try to read data from those tables, an exception is thrown.
A SQL transaction is, by design, ACID. In particular, it is the "I" that is hurting you here - this is designed to prevent other connections seeing the inconsistent intermediate state.
An individual reading connection can elect to ignore this rule by using the NOLOCK hint, or the READ UNCOMMITTED isolation level, but it sounds like what you want is for the writing connection not to take locks. Well, that isn't going to happen.
However, what might help is for readers to use snapshot isolation, which achieves isolation without the reader taking locks (by looking at, as the name suggests, a point-in-time snapshot of the consistent state when the transaction started).
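For example, a reading connection might opt into snapshot isolation roughly like this (a sketch only; the Faresheet table name is a placeholder, and ALLOW_SNAPSHOT_ISOLATION must be enabled on the database first):
using (var connection = new SqlConnection(Connection.ConnectionString))
{
    connection.Open();

    // The reader sees the last committed, consistent state and does not
    // block on the writer's exclusive locks.
    using (var readTran = connection.BeginTransaction(IsolationLevel.Snapshot))
    using (var cmd = new SqlCommand("SELECT * FROM Faresheet", connection, readTran))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process the snapshot rows
        }
    }
}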
However, IMO you would be better advised to look at either:
multiple, more granular, transactions from the writer
performing the work in a staging table (a parallel copy of the data), then merging that into the real data in a few mass-insert/update/delete operations, minimising the transaction time
The first is simpler.
The simple fact is: if you take a long-running transaction that operates on a lot of data, yes you are going to be causing problems. Which is why you don't do that. The system is operating correctly.
Try to execute your reads within a transaction as well and use the isolation level READ UNCOMMITTED. This will prevent the read from being locked, but might produce invalid results:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION
SELECT * FROM Table
COMMIT TRANSACTION
There is a misconception that dealing with transactions/isolation levels only matters when writing, when in fact it is equally important when reading.
@AKASH88, the SNAPSHOT isolation level is what you are looking for.
You say that even with SNAPSHOT it is not working as expected and exclusive locking still happens. I can understand that; I had the same issue.
Make sure you don't just enable ALLOW_SNAPSHOT_ISOLATION in the database options; READ_COMMITTED_SNAPSHOT must be turned on as well.
This is SQL Server 2008, so it's still uncertain if this answer will help :(
Best regards!
The problem is not at the level of writing into the database but at the level of reading values. You are trying to read values that are still being inserted. Try changing your select query to the following:
select * from your_table_with_inserts with (nolock)
However, this overrides the isolation level of the current transaction and can cause dirty reads.
So the question is: are you using transactions on all queries, or only on insert/update?

Can a Snapshot transaction fail and only partially commit in a TransactionScope?

Greetings
I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:
using (var tran = MyDataLayer.Transaction())
{
    MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
    MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
    tran.Commit();
}
I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day, and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" but no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql". The only way those records should exist in the tables I'm looking at is via SprocTheFirst, and it's only ever called in this one function, so if it's called and succeeds then I would expect CallSomethingThatEventuallyDoesLinqToSql to be called and succeed too, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time that the records from SprocTheFirst were created.
So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?
We have spotted the same issue. I have recreated it here - https://github.com/DavidBetteridge/MSMQStressTest
For us, the issue appears when reading from the queue rather than writing to it. Our solution was to change the isolation level of the first read in the subscriber to Serializable.
No, but the snapshot isolation level isn't the same as serializable.
Snapshotted row versions are stored in tempdb until the transaction commits.
So some other transaction can read the old data just fine.
At least that's how I understood your problem. If not, please provide more info, like a graph of the timeline or something similar.
Can you verify that CallSomethingThatEventuallyDoesLinqToSql is using the same connection as the first call? Does the second call read data that the first filed into the DB... and, if it is unable to "see" that data, would that cause the second to skip a few steps and not do its job?
Just because you have it wrapped in a .NET transaction doesn't mean the data as seen in the DB is the same between connections. You could, for instance, have connections to two different databases and want to roll back both if one failed, or file data to a DB and post a message to MSMQ... if the MSMQ operation failed, it would roll back the DB operation too. The .NET transaction takes care of this multi-technology coordination for you.
I do remember a problem in early versions of ADO.NET (maybe 3.0) where the pooled connection code would allocate a new DB connection rather than use the current one when a .NET-level TransactionScope was used. I believe it was fully implemented with 3.5 (I may have my versions wrong... it might be 3.5 and 3.5.1). It could also be caused by MyDataLayer and how it allocates connections.
Use SQL Profiler to trace these operations and make sure the work is being done on the same spid.
It sounds like your connection may not be enlisted in the transaction. When do you create your connection object? If it is created before the TransactionScope, then it will not be enlisted in the transaction.
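A quick sketch of that pitfall (connectionString is a placeholder; this is not the asker's MyDataLayer code):
// Opened BEFORE the scope exists: there is no ambient transaction to enlist in.
var con = new SqlConnection(connectionString);
con.Open();

using (var scope = new TransactionScope(
    TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
{
    // Work done on 'con' here commits independently of the scope.
    // Either open the connection inside the scope, or explicitly call
    // con.EnlistTransaction(Transaction.Current) to tie it to the scope.
    scope.Complete();
}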
