SqlException: Deadlock - c#

I get exceptions like these when I try to get data from a SQL Server database in C#:
System.Data.SqlClient.SqlException: Transaction (Process ID 97) was deadlocked on lock resources with another process and has been chosen as the deadlock victim.
OR
System.Data.SqlClient.SqlException: Transaction (Process ID 62) was deadlocked on lock resources with another process and has been chosen as the deadlock victim.
OR
System.Data.SqlClient.SqlException: Transaction (Process ID 54) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
This is the code:
using (SqlConnection con = new SqlConnection(datasource))
{
    SqlCommand cmd = new SqlCommand("Select * from MyTable Where ID='1' ", con);
    cmd.CommandTimeout = 300;
    con.Open();
    SqlDataAdapter adapter = new SqlDataAdapter(cmd);
    DataSet ds = new DataSet();
    adapter.Fill(ds);
    con.Close();
    return ds.Tables[0];
}
This happens every time.
Any ideas on how these can be resolved?

There are a couple of things you can do to lessen the number of deadlocks you receive, and some things you can do to completely eliminate them.
First off, launch SQL Server Profiler and tell it to give you a deadlock graph. Running this trace will tell you the other query which is conflicting with yours. Your query is quite simple, though I seriously doubt you have a SELECT * query off a table called MyTable in your system...
Anyway, armed with the deadlock graph and the other query, you should be able to tell what resources are deadlocking. The classic solution is to change the order of both queries such that the resources are accessed in the same order -- this avoids cycles.
Other things you can do:
Speed up your queries by, among other things, applying the correct indexes to them.
Enable snapshot isolation on the database and use SET TRANSACTION ISOLATION LEVEL SNAPSHOT in your transactions where appropriate. Also consider enabling READ_COMMITTED_SNAPSHOT (read committed with row-versioning). In many cases this is enough to eliminate most deadlocks completely. Read about transaction isolation levels and understand what you're doing; a minimal C# sketch follows below.
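As a rough illustration, here is a minimal sketch of opening a snapshot transaction from C#. It assumes ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON has already been run against the database, and it reuses the query and the datasource variable from the question:
using (SqlConnection con = new SqlConnection(datasource))
{
    con.Open();
    // Assumes ALLOW_SNAPSHOT_ISOLATION is already ON for this database.
    using (SqlTransaction tran = con.BeginTransaction(IsolationLevel.Snapshot))
    using (SqlCommand cmd = new SqlCommand("Select * from MyTable Where ID='1' ", con, tran))
    {
        // Snapshot readers work against row versions and take no shared locks,
        // so this SELECT cannot participate in a reader/writer deadlock.
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // consume rows
            }
        }
        tran.Commit();
    }
}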

Not that this is going to help the deadlock issue, but you should be disposing your other IDisposable objects just like you're disposing your SqlConnection, like so:
using (SqlConnection con = new SqlConnection(datasource))
using (SqlCommand cmd = new SqlCommand("Select * from MyTable Where ID='1' ", con))
{
    cmd.CommandTimeout = 300;
    con.Open();
    using (SqlDataAdapter adapter = new SqlDataAdapter(cmd))
    using (DataSet ds = new DataSet())
    {
        adapter.Fill(ds);
        return ds.Tables[0];
    }
}
You might be able to avoid the lock with a locking hint in your query, like this:
Select * from MyTable with (nolock) Where ID='1'
I want to be clear, though: you're allowing reads of uncommitted data with this solution, which is a risk in a transactional system. Read this answer. Hope this helps.

Basically, the SQL Server concurrency model means you can never completely avoid this exception (e.g. completely unrelated transactions might deadlock each other if they happen to lock the same index page or something similar). The best you can do is keep your transactions short to reduce the likelihood, and if you get the exception, do what it says and retry the transaction.
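A minimal retry sketch, assuming you want to give up after a few attempts (error number 1205 is SQL Server's deadlock-victim error; the helper name, retry count, and back-off are illustrative choices, not a standard API):
public static T RetryOnDeadlock<T>(Func<T> operation, int maxRetries = 3)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return operation();
        }
        catch (SqlException ex) when (ex.Number == 1205 && attempt < maxRetries)
        {
            // Deadlock victim: back off briefly, then rerun the transaction,
            // as the error message suggests.
            System.Threading.Thread.Sleep(100 * attempt);
        }
    }
}
You'd then wrap the data access, e.g. DataTable table = RetryOnDeadlock(() => GetMyTable()); where GetMyTable is a hypothetical wrapper around the code from the question.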

Related

C#, SQL Server - block table row reading, edit if needed and release lock

I need to block database reads at row level while I'm executing update logic for the same row. What would a nice, clean solution look like? Here is some code I'm working on:
using (SqlConnection conn = new SqlConnection(Configuration.ConnectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction(IsolationLevel.Serializable))
    {
        //TODO lock table row with ID=primaryKey (block other reads and updates)
        using (SqlCommand cmd = new SqlCommand("SELECT Data FROM MyTable WHERE ID=@primaryKey", conn, tran))
        {
            cmd.Parameters.AddWithValue("@primaryKey", primaryKey);
            using (var reader = cmd.ExecuteReader())
            {
                data = PopulateData(reader);
            }
        }
        if (isUpdateNeeded(data))
        {
            ChangeExternalSystemStateAndUpdateData(data); //EDIT - concurrent calls not allowed
            WriteUpdatesToDatabase(conn, tran, data); //EDIT
            tran.Commit();
        }
    } //release lock and allow other processes to read row with ID=primaryKey
}
EDIT:
I have the following constraints:
The code can be executed from different app pools simultaneously, so an in-memory lock is not an option.
ChangeExternalSystemStateAndUpdateData() must be executed only once in the scope of the MyTable row; concurrent calls will cause problems.
Usually the problem here isn't so much row locking, but rather: other SPIDs acquiring read locks between your read and your update - which can lead to deadlock scenarios; the common fix here is to acquire a write lock in the initial read:
SELECT Data FROM MyTable WITH (UPDLOCK) WHERE ID=@primaryKey
Note that if another SPID is explicitly using NOLOCK then they'll still blitz past it.
You could also try adding ROWLOCK, i.e. WITH (UPDLOCK, ROWLOCK) - personally, I'd keep it simple initially. Since you're in a serializable context, trying to be too granular may be a lost cause (key-range locks, etc.).
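Plugged into the question's code (reusing its conn, tran, data, and primaryKey variables), only the read changes; a sketch of that part:
// Take the update lock as part of the initial read, inside the transaction,
// so no other writer (or UPDLOCK reader) can touch the row until commit/rollback.
using (SqlCommand cmd = new SqlCommand(
    "SELECT Data FROM MyTable WITH (UPDLOCK) WHERE ID=@primaryKey", conn, tran))
{
    cmd.Parameters.AddWithValue("@primaryKey", primaryKey);
    using (var reader = cmd.ExecuteReader())
    {
        data = PopulateData(reader);
    }
}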

How can I prevent becoming a deadlock victim?

I got this exception message:
Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim.
The only line of my code that was implicated in the Stack Trace was the last one here:
public static DataTable ExecuteSQLReturnDataTable(string sql, CommandType cmdType, params SqlParameter[] parameters)
{
    using (DataSet ds = new DataSet())
    using (SqlConnection connStr = new SqlConnection(CPSConnStr))
    using (SqlCommand cmd = new SqlCommand(sql, connStr))
    {
        cmd.CommandType = cmdType;
        cmd.CommandTimeout = EXTENDED_TIMEOUT;
        foreach (var item in parameters)
        {
            cmd.Parameters.Add(item);
        }
        try
        {
            cmd.Connection.Open();
            new SqlDataAdapter(cmd).Fill(ds);
This is a general-purpose method that I use for all sorts of queries; I haven't changed it recently, nor have I ever seen this particular exception before.
What can I do to guard against this exception being thrown again?
You can catch the deadlock exception and retry X number of times before giving up.
There's no magic solution to avoid deadlocks. If SQL Server detects a deadlock it's going to pick one of the processes to kill. In some cases you may have had deadlocks where your process was the one that was lucky enough to continue.
You can use SQL Profiler to capture the deadlocks. I had to do this in the past to try and figure out what was actually causing the deadlocks. The less often this happens the harder it is to track down. In our test environment we just created some testing code to hammer the database from a few different machines to try and cause a deadlock.
In our case we made some changes to our indexes and modified database triggers to reduce the deadlocks as best we could. In the end we still had to implement the retries as a "just in case".
It might have helped if you had shown the SQL that was passed to ExecuteSQLReturnDataTable. Meanwhile read Minimizing Deadlocks.
Of course, you may have to also look at whatever else is contributing to the deadlock.

Slow SQL execution (calling stored procedures) on huge site

First of all, I realize that my question MAY be broad; please bear with me, as I've been thinking about how to phrase it for a month and I'm still not 100% sure how to express my issues.
I'm currently developing a website that will be used by many thousands of users daily. The bottleneck is the communication with the database.
Each and every conversation with the tables is done through stored procedures, whose calls look like this:
public void storedProcedure(int id, out DataSet ds)
{
    ds = new DataSet("resource");
    SqlDataReader objReader = null;
    SqlCommand cmd = new SqlCommand("storedProcedure", DbConn.objConn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SqlParameter("@id", id));
    openConnection(cmd);
    SqlDataAdapter objDataAdapter = new SqlDataAdapter();
    objDataAdapter.SelectCommand = cmd;
    objDataAdapter.Fill(ds);
    cmd.Connection.Close();
}
Or
public void anotherStoredProcedure(int var1, int var2, int var3, int var4, string var5, out DataSet ds)
{
    ds = new DataSet("ai");
    SqlCommand cmd = new SqlCommand("anotherStoredProcedure", DbConn.objConn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add(new SqlParameter("@var1", var1));
    cmd.Parameters.Add(new SqlParameter("@var2", var2));
    cmd.Parameters.Add(new SqlParameter("@var3", var3));
    cmd.Parameters.Add(new SqlParameter("@var4", var4));
    cmd.Parameters.Add(new SqlParameter("@var5", var5));
    openConnection(cmd);
    SqlDataAdapter objDataAdapter = new SqlDataAdapter();
    objDataAdapter.SelectCommand = cmd;
    objDataAdapter.Fill(ds);
    cmd.Connection.Close();
}
My objConn is defined as follows:
public static string DatabaseConnectionString = System.Configuration.ConfigurationManager.ConnectionStrings["objConnLocal"].ConnectionString;
public static SqlConnection objConn = new SqlConnection(DatabaseConnectionString);
And of course in web.config I have:
<add name="objConnLocal" connectionString="Initial Catalog=t1;Data Source=1.2.3.4;Uid=id;pwd=pwd;Min Pool Size=20;Max Pool Size=200;" providerName="SQLOLEDB.1"/>
Now the issue is: on every Page_Load there are a few SP calls (above), and when the user starts navigating through the page, more calls are made.
At the moment only the development and testing team are on the site, and at times the speed is really slow. Frequently it keeps loading until it times out (error 504).
Another problem (only every now and then, but certainly frequent enough to be noticeable) on first user login is that it keeps trying to run a call, but the connection claims to be open even though it shouldn't be. A not-really-working workaround is:
private void openConnection(SqlCommand cmd)
{
    if (cmd.Connection.State != ConnectionState.Closed)
    {
        cmd.Connection.Close();
    }
    if (cmd.Connection.State == ConnectionState.Open)
    {
        cmd.Connection.Close();
    }
    try
    {
        cmd.Connection.Open();
    }
    catch (Exception ex)
    {
        System.Threading.Thread.Sleep(1000);
        HttpContext.Current.Response.Redirect("/");
    }
}
Which makes connecting slow but at least doesn't show the YSOD.
So, what am I doing wrong in my SQL calls that makes it so slow for only 5-10 users? What I have so far:
I've read on Stack Overflow that using "using" is quite nice, but I'm not entirely sure why, as it was only a single-line comment under an answer. Another idea for improvement was to use several connection strings instead of only one.
Resolved:
Changing the way the connection is established in the connection string, from username/password to Integrated Security, resolved the issue. If anyone's having a similar issue, refer to http://www.codeproject.com/Articles/17768/ADO-NET-Connection-Pooling-at-a-Glance
You're right - it's a broad question!
For context - many "thousands of users daily" isn't huge from a performance point of view. A well-built ASP.Net application can typically support hundreds of concurrent users on a decently specified developer laptop; assuming 10K users per day, you probably only have a few dozen concurrent users at peak times (of course this depends entirely on the application domain).
The first thing to do is to use a profiler on your running code to see where the performance bottleneck is. This is available in VS, and there are several 3rd party solutions (I like RedGate and JetBrains).
The profiler will tell you where your code is slow - it should be pretty obvious if it's taking seconds for pages to render.
At first glance, it looks like you have a problem with the database. So you can also use the SQLServer activity monitor to look at long-running queries.
Now the issue is: On every page_load there are a few SP calls (above), and
when the user starts navigating through the page, more calls are made.
This sounds like you've written webpages which won't display anything until the Stored Procedure calls have completed. This is never a good idea.
Shift these SP calls into a background thread, so the user at least sees something when they go onto the webpage (like a "Please wait" message). This can also help prevent timeout messages.
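As a rough sketch of that idea, assuming .NET 4.5+ Web Forms with Async="true" in the @Page directive (the method name is a placeholder; "storedProcedure" is the call from the question):
protected void Page_Load(object sender, EventArgs e)
{
    // Queue the slow database work; the page can render a "Please wait"
    // placeholder while the task runs.
    RegisterAsyncTask(new PageAsyncTask(LoadDataAsync));
}

private async Task LoadDataAsync()
{
    // A fresh connection per call (instead of the shared static objConn)
    // also sidesteps the "connection already open" symptom; pooling makes
    // opening cheap.
    using (var conn = new SqlConnection(DbConn.DatabaseConnectionString))
    using (var cmd = new SqlCommand("storedProcedure", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add(new SqlParameter("@id", 1));
        await conn.OpenAsync();
        using (var reader = await cmd.ExecuteReaderAsync())
        {
            // bind results to the page here
        }
    }
}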
One other thing: you don't say why your SPs take so long to run.
If you're dealing with lots of records, it's worth running a SQL script (described at the link below) to check for missing SQL Server indexes.
Finding missing indexes
This script shows the missing indexes that have had the most impact on your users, and also tells you the syntax of the CREATE INDEX command you'd need to run to add them.

How to rollback multiple Queries on different database servers in case of any error

I am using several SQL procedures in an application.
The first procedure inserts some rows, then there is some processing in C# code, then a second procedure does some updates, then more code processing, then a third procedure deletes some records and inserts new ones. When all of this is done on Server 1, data is fetched from that server and sent to Server 2, where records are deleted and new ones are inserted.
If there is an error at any stage, on any server, in any procedure, I want to roll back all the records.
I cannot use BEGIN TRAN around everything, because the processing takes time and I cannot block the tables, as other users are using the same tables in parallel. So kindly tell me how I can achieve this without blocking the tables for other users.
Thanks in advance.
Edited (Added code example):
I tried TransactionScope, but I am getting an exception while opening the connection. I configured MSDTC, but maybe not properly:
"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
{
    try
    {
        dl.SetBookReadyToLive(13570, false);
        //SetBookReadyToLive
        dl.AddTestSubmiitedTitleID(23402);
        dl.AddBookAuthorAtLIve(13570, 1);
        ts.Complete();
    }
    catch (Exception ex)
    {
        Response.Write(ex.Message);
    }
}
public void SetBookReadyToLive(long BookID, bool status)
{
    try
    {
        if (dbConMeta.State != ConnectionState.Open)
            dbConMeta.Open();
        SqlCommand cmd = new SqlCommand("spSetBookReadyToLive", dbConMeta);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Clear();
        cmd.Parameters.Add("@BookID", BookID);
        cmd.Parameters.Add("@status", status);
        cmd.ExecuteNonQuery();
        if (dbConMeta.State == ConnectionState.Open)
            dbConMeta.Close();
    }
    catch
    {
        if (dbConMeta.State == ConnectionState.Open)
            dbConMeta.Close();
    }
}
I get the exception when opening the connection in the method above.
I am using SQL Server 2000. I have set up the MSDTC configuration on the machine where SQL Server is installed, and also on my PC from where I am running the code, but I still get the same exception.
Kindly help me configure it.
You can use the TransactionScope class. It generally works well, but in the case of distributed SQL servers, as in your case, it requires MSDTC to be enabled on both servers and configured properly (security has to be granted for network and distributed transactions, and so on...).
Here is a copy-paste of an example from MSDN; you could "almost" use it like this... :)
// Create the TransactionScope to execute the commands, guaranteeing
// that both commands can commit or roll back as a single unit of work.
using (TransactionScope scope = new TransactionScope())
{
    using (SqlConnection connection1 = new SqlConnection(connectString1))
    {
        // Opening the connection automatically enlists it in the
        // TransactionScope as a lightweight transaction.
        connection1.Open();

        // Create the SqlCommand object and execute the first command.
        SqlCommand command1 = new SqlCommand(commandText1, connection1);
        returnValue = command1.ExecuteNonQuery();
        writer.WriteLine("Rows to be affected by command1: {0}", returnValue);

        // If you get here, this means that command1 succeeded. By nesting
        // the using block for connection2 inside that of connection1, you
        // conserve server and network resources as connection2 is opened
        // only when there is a chance that the transaction can commit.
        using (SqlConnection connection2 = new SqlConnection(connectString2))
        {
            // The transaction is escalated to a full distributed
            // transaction when connection2 is opened.
            connection2.Open();

            // Execute the second command in the second database.
            returnValue = 0;
            SqlCommand command2 = new SqlCommand(commandText2, connection2);
            returnValue = command2.ExecuteNonQuery();
            writer.WriteLine("Rows to be affected by command2: {0}", returnValue);
        }
    }

    // The Complete method commits the transaction. If an exception has been thrown,
    // Complete is not called and the transaction is rolled back.
    scope.Complete();
}
source: TransactionScope Class
To minimize locks, you can specify the IsolationLevel via the constructor overload that takes a TransactionOptions; the default is Serializable, and if that's more than you need you can set it to ReadCommitted.
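A minimal sketch of that overload (the timeout is an arbitrary illustrative value):
// Run the scope at ReadCommitted instead of the Serializable default,
// so fewer and shorter locks are held.
var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(30) // illustrative
};
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... open connections and run commands here ...
    scope.Complete();
}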
Note: personally, I would not use this unless absolutely needed, because it's a bit of a pain to keep the DTC configured everywhere, and distributed transactions are in general slower than local ones - but it really depends on your BL/DAL logic.
Short answer: the same way you would do it in SQL Server Management Studio.
1. You open a connection to a server.
2. You open a transaction for that specific server.
3. You run your queries related to this server.
4. You make sure to keep your connection alive while you... [go back to 1 for the next server]
5. If all your queries worked, commit all your changes.
6. Else, roll back all your queries.
Warning: the first table will most likely be locked until you're done with all your servers/queries. What you can do to help with this: if you have a lot of data, transfer it to temporary tables on each server before doing step 2. Once that is done, open the transactions, do the fast part, then commit/rollback as soon as possible.
Note: I know you asked how to achieve this without locking the tables, which is why I added an idea in the « warning » part.
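A minimal sketch of those steps for two servers. The connection strings and procedure names are placeholders, and note this is not a true distributed transaction: a failure between the two Commit() calls can still leave the servers inconsistent.
using (var conn1 = new SqlConnection(connStr1)) // placeholder connection strings
using (var conn2 = new SqlConnection(connStr2))
{
    conn1.Open();
    conn2.Open();
    using (SqlTransaction tran1 = conn1.BeginTransaction())
    using (SqlTransaction tran2 = conn2.BeginTransaction())
    {
        try
        {
            using (var cmd1 = new SqlCommand("EXEC dbo.DoServer1Work", conn1, tran1)) // placeholder proc
                cmd1.ExecuteNonQuery();
            using (var cmd2 = new SqlCommand("EXEC dbo.DoServer2Work", conn2, tran2)) // placeholder proc
                cmd2.ExecuteNonQuery();

            tran1.Commit();
            tran2.Commit(); // if this fails after tran1 committed, manual repair is needed
        }
        catch
        {
            // Roll back whatever hasn't committed yet; disposing an
            // uncommitted SqlTransaction would also roll it back.
            try { tran1.Rollback(); } catch (InvalidOperationException) { }
            try { tran2.Rollback(); } catch (InvalidOperationException) { }
            throw;
        }
    }
}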

Disable read/write to a table via SqlTransaction in .net?

How do I use SqlTransaction in .NET 2.0 so that when I start reading data from a table, that table is blocked for others (other programs) to read from or write to?
If SqlTransaction is not a good option, then what is?
This can be achieved by using a Serializable transaction together with the TABLOCKX hint in the initial SELECT statement. TABLOCKX takes an exclusive lock on the table so nobody else can use it, and a Serializable transaction demands HOLDLOCK, which means all locks are kept until the end of the transaction (you can also use HOLDLOCK directly).
Update: I just tested different scenarios in Management Studio, and it looks like you do not need to explicitly use a Serializable transaction. Using TABLOCKX within any transaction is enough.
Be aware that such an approach can be a big bottleneck, because only one transaction can operate on the table at a time = no concurrency. Even if you read and work with a single record out of a million, nobody else will be able to work with the table until your transaction ends.
So the command should look like:
SELECT * FROM Table WITH (TABLOCKX) WHERE ...
To use a serializable transaction, you can use SqlTransaction:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlTransaction transaction = connection.BeginTransaction(IsolationLevel.Serializable);
    try
    {
        ...
        transaction.Commit();
    }
    catch (Exception)
    {
        transaction.Rollback();
        ...
    }
}
Or System.Transactions.TransactionScope (default isolation level should be Serializable).
using (TransactionScope scope = new TransactionScope())
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        ...
    }
    scope.Complete();
}
