I have implemented SqlTransaction in C# to begin, commit and roll back transactions. Everything works, but I ran into a problem accessing the tables involved in the transaction while it is open.
I was not able to read those tables during the transaction. While researching this, I found that it happens because of an exclusive lock: any subsequent SELECT on that data has to wait for the exclusive lock to be released. I then went through every isolation level provided by SqlTransaction, but none of them worked.
So, I need to release the exclusive lock during the transaction so that other users can access the table and read its data.
Is there any method to achieve this?
Thanks in advance.
Here's my C# code for the transaction:
SqlTransaction transaction = null;
try
{
    using (SqlConnection connection = new SqlConnection(Connection.ConnectionString))
    {
        connection.Open();
        transaction = connection.BeginTransaction(IsolationLevel.Snapshot, "FaresheetTransaction");

        // ... all the transactional work happens here ...

        if (transaction.Connection != null)
        {
            transaction.Commit();
            transaction.Dispose();
        }
    }
}
catch (Exception ex)
{
    // The transaction is declared outside the try block so it is still
    // in scope here for the rollback.
    if (transaction != null && transaction.Connection != null)
        transaction.Rollback();
    if (transaction != null)
        transaction.Dispose();
}
This code works fine, but the problem occurs when I access the data of the tables involved in the transaction while the transaction is still running. Those tables are also accessed by other parts of the application, so when I try to read data from them, an exception is thrown.
A SQL transaction is, by design, ACID. In particular, it is the "I" (isolation) that is hurting you here - it is designed to prevent other connections from seeing the inconsistent intermediate state.
An individual reading connection can elect to ignore this rule by using the NOLOCK hint or the READ UNCOMMITTED isolation level, but it sounds like what you want is for the writing connection to not take locks. Well, that isn't going to happen.
However, what might help is for readers to use snapshot isolation, which achieves isolation without the reader taking locks (by looking at, as the name suggests, a point-in-time snapshot of the consistent state when the transaction started).
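For example, a reading connection could do something like this (a minimal sketch; the Faresheet table name is just a stand-in for whatever your readers query, and snapshot isolation has to be enabled on the database first):

using (var connection = new SqlConnection(Connection.ConnectionString))
{
    connection.Open();

    // Requires ALLOW_SNAPSHOT_ISOLATION to be ON for the database,
    // otherwise the SELECT below will fail.
    using (var transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
    {
        // "Faresheet" is only an example table name.
        using (var command = new SqlCommand("SELECT * FROM Faresheet", connection, transaction))
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // Rows reflect the committed state as of the moment this
                // transaction started; the writer's open transaction does not block it.
            }
        }
        transaction.Commit();   // read-only, but ends the snapshot transaction cleanly
    }
}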
However, IMO you would be better advised to look at either:
multiple, more granular, transactions from the writer
performing the work in a staging table (a parallel copy of the data), then merging that into the real data in a few mass-insert/update/delete operations, minimising the transaction time
The first is simpler.
The simple fact is: if you take a long-running transaction that operates on a lot of data, yes you are going to be causing problems. Which is why you don't do that. The system is operating correctly.
Try to execute your reads within a transaction as well, and use the READ UNCOMMITTED isolation level. This will prevent the read from being blocked, but it might produce invalid results:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION
SELECT * FROM Table
COMMIT TRANSACTION
There is a misconception that dealing with transactions/isolation levels only matters when writing, when in fact it is equally important when reading.
@AKASH88, the SNAPSHOT isolation level is what you are looking for.
You say that even with SNAPSHOT it is not working as expected and an exclusive lock still occurs. I can understand that; I had the same issue.
Make sure you don't just enable SNAPSHOT in the database options; READ COMMITTED SNAPSHOT must be turned on as well.
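Both options can be turned on with a couple of ALTER DATABASE statements. Here is a hedged sketch in C# (adminConnectionString and "MyDb" are placeholders; you need ALTER DATABASE permission, and this is usually a one-off run from SSMS):

using (var connection = new SqlConnection(adminConnectionString))
{
    connection.Open();
    using (var command = connection.CreateCommand())
    {
        // Lets transactions request IsolationLevel.Snapshot explicitly.
        command.CommandText = "ALTER DATABASE [MyDb] SET ALLOW_SNAPSHOT_ISOLATION ON;";
        command.ExecuteNonQuery();

        // Makes ordinary READ COMMITTED readers use row versions instead of shared locks.
        // ROLLBACK IMMEDIATE kicks out other sessions so the option can be applied.
        command.CommandText = "ALTER DATABASE [MyDb] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;";
        command.ExecuteNonQuery();
    }
}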
This is SQL Server 2008, so it's still uncertain if this answer will help :(
Best regards!
The problem is not at the level of writing into the database but at the level of reading values. You are trying to read values that are being inserted. Try changing your select query to the following:
select * from your_table_with_inserts with (nolock)
However, this overrides the isolation level of the current transaction and can cause dirty reads.
So the question is: are you using a transaction on all queries, or only on insert/update?
Recently we faced quite an interesting issue that has to do with SQL transaction timeouts. The statement that timed out does not really matter for the sake of the question, but it was a single INSERT statement without an explicit transaction, with a client-generated GUID as the key:
INSERT MyTable
(id, ...)
VALUES (<client-app-generated-guid>, ...)
We also have retry policies in place, so that if a command fails with a SqlException it is retried. One day SQL Server (Azure SQL) did not behave normally, and we saw a lot of strange PK violation errors during retries. They were caused by retrying transactions that had actually committed successfully on the SQL Server (which produces an insert with an already-taken ID). I understand that a SQL timeout is a purely client-side concept, so if the client thinks that the SqlCommand failed, it might or might not have.
I suspect that explicit client-side transaction control, for instance wrapping statements with a TransactionScope as shown below, will fix 99% of such troubles, because Commit is actually quite a fast and cheap operation. However, I still see a caveat: the timeout can also happen at the commit stage. The application can then again be in a position where it's impossible to tell whether the transaction really committed or not (in order to decide whether a retry is needed).
The question is how to write code in a bulletproof (against this kind of trouble) and generic fashion, and to retry only when it is positively clear that the transaction was not committed.
using (var trx = new TransactionScope())
using (var con = GetOpenConnection(connectionString))
{
con.Execute("<some-non-idempotent-query>");
// what if Complete() times out?!
// to retry or not to retry?!
trx.Complete();
}
The problem is that the exception does not mean that the transaction failed. For any compensating action (like retrying) you need a definite way of telling whether it failed. There are scalability issues with what I will suggest, but it's the technique that is the important thing; the scalability issues can be solved in other ways.
My solution:
The last INSERT before the COMMIT writes a GUID to a tracking table.
If an exception occurs that indicates a network failure, SELECT @@TRANCOUNT. If it indicates you are still in a transaction (it is greater than 0 - which probably should never happen, but it's worth checking), then you can happily resubmit your COMMIT.
If @@TRANCOUNT returns 0, you are no longer in a transaction. Selecting your GUID from the tracking table will tell you whether your COMMIT was successful.
If your commit was not successful (@@TRANCOUNT == 0 and your GUID is not present in the tracking table), then resubmit your entire batch from the BEGIN TRANSACTION onwards.
The general approach is: try to read back what you just tried to insert.
If you can read back the ID that you tried to insert, then the previous transaction committed successfully and there is no need to retry.
If you can't find the ID that you tried to insert, then you know that your attempt to insert failed, so you should retry.
I'm afraid there is no way to have a completely generic pattern that would work for any SQL statement. Your "checking" code needs to know what to look for.
If it is INSERT with ID - then you are looking for that ID.
If it is some UPDATE, then the check would be custom and depend on the nature of that UPDATE.
If it is DELETE, then the check consists of trying to read what was meant to be deleted.
Actually, here is a generic pattern: any data-modification batch that has one or more INSERT, UPDATE or DELETE statements should include one additional INSERT within that transaction that writes a GUID (an ID of the data-modifying transaction itself) into a dedicated audit table. Your checking code then tries to read that same GUID from the audit table. If the GUID is found, you know the previous transaction committed successfully. If the GUID is not found, you know the previous transaction was rolled back and you can retry.
Having this dedicated audit table unifies and standardizes the checks. The checks no longer depend on the internals and details of your data-changing code; your data-modification code and your verification code depend on the same agreed interface - the audit table.
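A rough sketch of that pattern (the TransactionAudit table, its AuditId column, and the connectionString variable are illustrative assumptions, not an existing schema):

var auditId = Guid.NewGuid();
try
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            // ... the real INSERT/UPDATE/DELETE work goes here, all using `transaction` ...

            // Last statement before the COMMIT: record this batch's GUID.
            using (var audit = new SqlCommand(
                "INSERT INTO TransactionAudit (AuditId) VALUES (@id)", connection, transaction))
            {
                audit.Parameters.AddWithValue("@id", auditId);
                audit.ExecuteNonQuery();
            }

            transaction.Commit();
        }
    }
}
catch (SqlException)
{
    // Ambiguous outcome (e.g. a timeout): check whether the batch actually committed.
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var check = new SqlCommand(
            "SELECT COUNT(*) FROM TransactionAudit WHERE AuditId = @id", connection))
        {
            check.Parameters.AddWithValue("@id", auditId);
            bool committed = (int)check.ExecuteScalar() > 0;
            if (!committed)
            {
                // The GUID is absent, so the transaction rolled back: safe to retry the whole batch.
            }
        }
    }
}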
I have C# code that runs several stored procedures, all contained in a transaction. As I step through the methods and procedures, I would like to check data in the back end through SSMS. I have put breakpoints right before a commit occurs and right after a transaction has begun. Is there a way to do a dirty read through SSMS?
Source: Microsoft SQL Server documentation (for the description quoted below the query).
Try this:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
select * from TableName;
Implements dirty read, or isolation level 0 locking, which means that no shared locks are issued and no exclusive locks are honored. When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. This option has the same effect as setting NOLOCK on all tables in all SELECT statements in a transaction. This is the least restrictive of the four isolation levels.
I have an ASP.NET MVC application using EF6 and SQL Server with up to 15 or so concurrent users. To ensure the consistency of data between different queries during each page request, I have everything enclosed in transactions (using System.Transactions.TransactionScope).
When I use IsolationLevel.ReadCommitted and .Serializable, I get deadlock errors like this:
Transaction (Process ID #) was deadlocked on lock resources with another process and has been chosen as the deadlock victim.
When I use IsolationLevel.Snapshot, I get errors like this:
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'dbo.#' directly or indirectly in database '#' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
These errors are the least frequent when using IsolationLevel.Snapshot (one to three per day, roughly).
My understanding of the issue leads me to believe that the only ways to guarantee zero transaction failures are to either:
Completely serialize all database access, or
Implement some type of transaction retry functionality.
And I can't do 1 because some tasks and requests take a while to run, while other parts of the application need to stay reasonably responsive.
I'm inclined to think retry could be implemented by getting MVC to re-run the controller action, but I don't know how to go about doing such a thing.
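Something along these lines is roughly what I have in mind for option 2 - purely a sketch (the ExecuteWithRetry helper is made up, and with EF the deadlock may surface wrapped in an EntityException, so the real check would have to look at InnerException):

static void ExecuteWithRetry(Action work, int maxAttempts = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var scope = new TransactionScope(
                TransactionScopeOption.Required,
                new TransactionOptions { IsolationLevel = System.Transactions.IsolationLevel.Snapshot }))
            {
                work();          // the EF queries for this request go here
                scope.Complete();
                return;
            }
        }
        catch (SqlException ex)
        {
            // 1205 = chosen as deadlock victim, 3960 = snapshot update conflict.
            bool retryable = ex.Number == 1205 || ex.Number == 3960;
            if (!retryable || attempt >= maxAttempts)
                throw;
            System.Threading.Thread.Sleep(100 * attempt);   // simple backoff before retrying
        }
    }
}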
I also don't know how to reproduce the errors that my users are causing. All I get right now are rather uninformative exception logs. I could set up EF to log all SQL that gets run on the DB, now that EF6 lets you do that, but I'm not sure how helpful that would actually be.
Any ideas?
Regardless of isolation level, there are two categories of locks: EXCLUSIVE for INSERT, DELETE and UPDATE, and SHARED for SELECT.
You should try to limit the transaction time for EXCLUSIVE locks to a minimum. The default isolation level is READ COMMITTED. If you are writing/running reports against an OLTP system, writers will block readers and you might get blocking issues.
In 2005, READ COMMITTED SNAPSHOT ISOLATION was introduced. For readers, the version store in tempdb is used to capture a snapshot of the data to satisfy the current query, with a lot less overhead than SNAPSHOT ISOLATION. In short, readers are no longer blocked by writers.
This should fix your blocking issues. You need to remove any table hints or isolation commands you currently have.
See this article from Brent Ozar:
http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
Will it fix your deadlock? Probably not.
Deadlocks are caused by two or more processes taking exclusive locks on resources in opposite order.
Check out MSDN - way cooler pictures, and it mentions the deadlock trace flags:
http://technet.microsoft.com/en-us/library/ms178104(v=sql.105).aspx
Process 1
DEBIT BANK ACCOUNT
CREDIT VENDOR ACCOUNT
Process 2
CREDIT VENDOR ACCOUNT
DEBIT BANK ACCOUNT
In short, change the order of your DML so that the tables are accessed in a consistent order. Turn on a trace flag to get the actual T-SQL causing the issue.
Last but not least, check out application locks as a last resort. They can be used as mutexes around code that might be causing deadlocks.
http://www.sqlteam.com/article/application-locks-or-mutexes-in-sql-server-2005
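As a rough illustration (the resource name and connectionString are placeholders, not anything from the question), an application lock held for the duration of a transaction might look like this:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        var getLock = new SqlCommand("sp_getapplock", connection, transaction)
        {
            CommandType = CommandType.StoredProcedure
        };
        getLock.Parameters.AddWithValue("@Resource", "AccountPostingSection");
        getLock.Parameters.AddWithValue("@LockMode", "Exclusive");
        getLock.Parameters.AddWithValue("@LockOwner", "Transaction");
        getLock.Parameters.AddWithValue("@LockTimeout", 5000);   // milliseconds
        var returnValue = getLock.Parameters.Add("@ReturnValue", SqlDbType.Int);
        returnValue.Direction = ParameterDirection.ReturnValue;
        getLock.ExecuteNonQuery();

        if ((int)returnValue.Value < 0)
            throw new TimeoutException("Could not acquire the application lock.");

        // ... do the deadlock-prone work here, in a consistent order ...

        // The lock is released automatically at COMMIT/ROLLBACK because
        // @LockOwner is 'Transaction'.
        transaction.Commit();
    }
}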
Since I have a "DB util" class with a DataSet QueryDB(string spName, DBInputParams inputParams) method which I use for all my calls to the database, I would like to reuse this method in order to support transacted calls.
So, in the end I will have a SqlDataAdapter.Fill within a SqlTransaction. Is this bad practice? I rarely see DataAdapter.Fill used within a transaction - ExecuteReader() is much more common. Is there any catch?
Edit 1: The thing is that inside my transaction I often also need to retrieve some data (e.g. auto-generated IDs), which is why I would like to get it back as a DataSet.
Edit 2: Strangely, when I use this approach in a loop of 10,000 iterations from 2 different processes, I get "Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." Is this the right behaviour?
Edit 3 (answer to Edit 2): I was using IDENT_CURRENT('XTable'), which was the source of the error. After I went back to SCOPE_IDENTITY(), everything was solved.
It is not bad practice. One thing to remember is that every statement uses an implicit transaction that is automatically committed when the statement ends. That is, a SELECT (such as the SELECT used by Fill) always uses a transaction; the question is whether it has to start one itself or whether it uses an existing one.
Is there any difference between the number, type and duration of locks acquired by a SELECT in an implicit transaction vs. an explicit transaction? Under the default transaction model (READ COMMITTED isolation), no, there is none. The behavior is identical and indistinguishable. Under other isolation levels (REPEATABLE READ, SERIALIZABLE) there is a difference, but that is precisely the difference required for the higher isolation level, and using an explicit transaction is the only way to achieve that isolation level when it is needed.
In addition, if the SELECT has to read the effects of a transaction that is pending (not yet committed), as in your example (reading back the generated IDs), then there is no other way: the SELECT must be part of the transaction that generated the IDs, otherwise it will not be able to see those uncommitted IDs.
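For example (a sketch only - the XTable columns shown here are made up), the Fill just has to use a command enlisted in the same transaction:

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        var insert = new SqlCommand(
            "INSERT INTO XTable (Name) VALUES (@name); SELECT SCOPE_IDENTITY();",
            connection, transaction);
        insert.Parameters.AddWithValue("@name", "example");
        int newId = Convert.ToInt32(insert.ExecuteScalar());

        // The adapter's SELECT is enlisted in the same transaction,
        // so it can see the row that was just inserted but not yet committed.
        var select = new SqlCommand("SELECT * FROM XTable WHERE Id = @id", connection, transaction);
        select.Parameters.AddWithValue("@id", newId);

        var result = new DataSet();
        using (var adapter = new SqlDataAdapter(select))
        {
            adapter.Fill(result);
        }

        transaction.Commit();
    }
}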
A note of caution though. I believe you have at your disposal a great tool that can make all this transaction handling much easier: System.Transactions. All ADO.NET code is System.Transactions-aware and will automatically enlist any connection and command into the pending transaction if you simply declare a TransactionScope. That is, if function Foo declares a TransactionScope and then calls function Bar, and Bar does any ADO.NET operation, it will automatically be part of the transaction declared in Foo, even if Bar does nothing explicitly. The TransactionScope is hooked into the thread context, and all ADO.NET calls made by Bar check for this context automatically and use it. Note that I really do mean any ADO.NET call, including Oracle provider ones.
There is a warning, though: "using new TransactionScope() Considered Harmful" - the default constructor of TransactionScope creates a Serializable transaction, which is overkill. You have to use the constructor that takes a TransactionOptions object and change the isolation level to ReadCommitted. A second gotcha with TransactionScope is that you have to be very careful how you manage connections: if you open more than one connection under a scope, they will be enlisted in a distributed transaction, which is slow, requires MSDTC to be configured, and leads to all sorts of hard-to-debug errors. But overall I feel that the benefits of using TransactionScope outweigh the problems, and the resulting code is always more elegant than passing around IDbTransaction explicitly.
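A minimal sketch of that shape (connectionString is a placeholder):

var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(30)
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();   // opened inside the scope, so it auto-enlists in the ambient transaction

    // Keep all work on this single connection to avoid promotion to a
    // distributed (MSDTC) transaction.
    // ... QueryDB(...) calls go here ...

    scope.Complete();
}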
It is bad practice because, while the transaction is open, the records/pages/tables that you change are locked for the duration of the transaction. The Fill just makes the whole process hold those locks longer. Depending on your SQL Server settings, this could block other access to those resources.
That said, if it is necessary, it is necessary, just realize the penalty for doing it.
Greetings
I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:
using(var tran = MyDataLayer.Transaction())
{
MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
tran.Commit();
}
I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by SprocTheFirst but no corresponding data from CallSomethingThatEventuallyDoesLinqToSql. The only way records should exist in the tables I'm looking at is via SprocTheFirst, and it's only ever called in this one function, so if it was called and succeeded I would expect CallSomethingThatEventuallyDoesLinqToSql to be called and succeed as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time the records from SprocTheFirst were created.
So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?
We have spotted the same issue. I have recreated it here - https://github.com/DavidBetteridge/MSMQStressTest
For us, the issue occurs when reading from the queue rather than writing to it. Our solution was to change the isolation level of the first read in the subscriber to Serializable.
No, but the Snapshot isolation level isn't the same as Serializable.
Snapshotted row versions are stored in tempdb until the transaction commits, so some other transaction can read the old data just fine.
At least that's how I understood your problem; if not, please provide more info, like a graph of the timeline or something similar.
Can you verify that CallSomethingThatEventuallyDoesLinqToSql is using the same connection as the first call? Does the second call read data that the first one wrote into the db, and if it were unable to "see" that data, would that cause the second call to skip a few steps and not do its job?
Just because you have it wrapped in a .NET transaction doesn't mean the data as seen in the db is the same between connections. You could, for instance, have connections to two different databases and want to roll back both if one failed, or write data to a DB and post a message to MSMQ, and if the MSMQ operation failed it would roll back the DB operation too. The .NET transaction takes care of this multi-technology scenario for you.
I do remember a problem in early versions of ADO.NET (maybe 3.0) where the connection-pooling code would allocate a new db connection rather than use the current one when a .NET-level TransactionScope was used. I believe it was fully fixed by 3.5 (I may have my versions wrong; it might be 3.5 and 3.5.1). It could also be caused by MyDataLayer and how it allocates connections.
Use SQL Profiler to trace these operations and make sure the work is being done on the same spid.
It sounds like your connection may not be enlisted in the transaction. When do you create your connection object? If it is created before the TransactionScope, it will not be enlisted in the transaction.
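A quick illustration of the difference (a sketch only; connectionString is a placeholder):

// Not enlisted: the connection was opened before the scope existed.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var scope = new TransactionScope())
    {
        // Commands on `connection` here run OUTSIDE the ambient transaction.
        scope.Complete();
    }
}

// Enlisted: the connection is opened inside the scope.
using (var scope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();   // auto-enlists in the ambient transaction
    // Commands on this connection are part of the transaction.
    scope.Complete();
}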