Background: I'm writing a C# function that puts long-running operations in a queue, and each operation is divided into 3 steps:
1. database operation (update/delete/add data)
2. long time calculation using web service
3. database operation (save the calculation result of step 2) on the same db table as in step 1, and check the consistency of the db table, e.g., that the items are the same as in step 1 (please see below for a more detailed example)
In order to avoid dirty data or corruption, I use a lock object (a static singleton) to ensure the 3 steps are executed as a single transaction. Without this lock, when multiple users call the function, they may modify the same db table at different steps of their own operations, e.g., user2 is deleting item A in his step 1 while user1 is checking whether A still exists in his step 3. (Additional info: I'm also using TransactionScope from Entity Framework to make each database operation a transaction, with repeatable read isolation.)
However, I need to deploy this to a cloud computing platform that uses load balancing, so my lock object won't take effect, because the function will be deployed on different servers.
Question: what can I do to make my lock work under the above circumstances?
This is a tricky problem - you need a distributed lock, or some sort of shared state.
Since you already have the database, you could move away from the static C# lock and instead use the database to manage the lock for you over the whole "transaction".
You don't say which database you are using, but if it's SQL Server, you can use an application lock to achieve this. It lets you explicitly "lock" a named resource, and all other clients will wait until that resource is unlocked. Check out:
http://technet.microsoft.com/en-us/library/ms189823.aspx
I've coded up an example implementation below. Start two instances to test it out.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
var locker = new SqlApplicationLock("MyAceApplication",
"Server=xxx;Database=scratch;User Id=xx;Password=xxx;");
Console.WriteLine("Aquiring the lock");
using (locker.TakeLock(TimeSpan.FromMinutes(2)))
{
Console.WriteLine("Lock Aquired, doing work which no one else can do. Press any key to release the lock.");
Console.ReadKey();
}
Console.WriteLine("Lock Released");
}
class SqlApplicationLock : IDisposable
{
private readonly String _uniqueId;
private readonly SqlConnection _sqlConnection;
private Boolean _isLockTaken = false;
public SqlApplicationLock(
String uniqueId,
String connectionString)
{
_uniqueId = uniqueId;
_sqlConnection = new SqlConnection(connectionString);
_sqlConnection.Open();
}
public IDisposable TakeLock(TimeSpan takeLockTimeout)
{
using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Suppress))
{
SqlCommand sqlCommand = new SqlCommand("sp_getapplock", _sqlConnection);
sqlCommand.CommandType = CommandType.StoredProcedure;
sqlCommand.CommandTimeout = (int)takeLockTimeout.TotalSeconds;
sqlCommand.Parameters.AddWithValue("Resource", _uniqueId);
sqlCommand.Parameters.AddWithValue("LockOwner", "Session");
sqlCommand.Parameters.AddWithValue("LockMode", "Exclusive");
sqlCommand.Parameters.AddWithValue("LockTimeout", (Int32)takeLockTimeout.TotalMilliseconds);
SqlParameter returnValue = sqlCommand.Parameters.Add("ReturnValue", SqlDbType.Int);
returnValue.Direction = ParameterDirection.ReturnValue;
sqlCommand.ExecuteNonQuery();
if ((int)returnValue.Value < 0)
{
throw new Exception(String.Format("sp_getapplock failed with errorCode '{0}'",
returnValue.Value));
}
_isLockTaken = true;
transactionScope.Complete();
}
return this;
}
public void ReleaseLock()
{
using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Suppress))
{
SqlCommand sqlCommand = new SqlCommand("sp_releaseapplock", _sqlConnection);
sqlCommand.CommandType = CommandType.StoredProcedure;
sqlCommand.Parameters.AddWithValue("Resource", _uniqueId);
sqlCommand.Parameters.AddWithValue("LockOwner", "Session");
sqlCommand.ExecuteNonQuery();
_isLockTaken = false;
transactionScope.Complete();
}
}
public void Dispose()
{
if (_isLockTaken)
{
ReleaseLock();
}
_sqlConnection.Close();
}
}
}
}
Related
I'm designing a small desktop app that fetches data from SQL Server. I used a BackgroundWorker to make the query execute in the background. The code that fetches data generally comes down to this:
public static DataTable GetData(string sqlQuery)
{
DataTable t = new DataTable();
using (SqlConnection c = new SqlConnection(GetConnectionString()))
{
c.Open();
using (SqlCommand cmd = new SqlCommand(sqlQuery))
{
cmd.Connection = c;
using (SqlDataReader r = cmd.ExecuteReader())
{
t.Load(r);
}
}
}
return t;
}
Since the query can take 10-15 minutes, I want to implement a cancellation request and pass it from the GUI layer to the DAL. The cancellation mechanism of BackgroundWorker won't let me cancel SqlCommand.ExecuteReader(), because it only stops when data is fetched from the server or an exception is thrown by the data provider.
I tried to use Task and async/await with SqlCommand.ExecuteReaderAsync(CancellationToken), but I am confused about where to use it in a multi-layer app (GUI -> BLL -> DAL). Roughly, this is what I have in mind for the DAL (untested sketch below).
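Here GetDataAsync is just my placeholder name, and GetConnectionString() is the same helper as in the code above:
// needs: using System.Data; using System.Data.SqlClient; using System.Threading; using System.Threading.Tasks;
public static async Task<DataTable> GetDataAsync(string sqlQuery, CancellationToken token)
{
    DataTable t = new DataTable();
    using (SqlConnection c = new SqlConnection(GetConnectionString()))
    {
        // Both OpenAsync and ExecuteReaderAsync observe the token, so a
        // cancel request from the caller can abort the running command.
        await c.OpenAsync(token);
        using (SqlCommand cmd = new SqlCommand(sqlQuery, c))
        using (SqlDataReader r = await cmd.ExecuteReaderAsync(token))
        {
            t.Load(r);
        }
    }
    return t;
}
My uncertainty is whether the GUI should simply own the CancellationTokenSource and the BLL should just forward the token down to this method.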
Have you tried using the SqlCommand.Cancel() method?
Approach: encapsulate that GetData method in a Thread/Worker, and when you cancel/stop that thread, call the Cancel() method on the SqlCommand that is being executed.
Here is an example of how to use it on a thread:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;
class Program
{
private static SqlCommand m_rCommand;
public static SqlCommand Command
{
get { return m_rCommand; }
set { m_rCommand = value; }
}
public static void Thread_Cancel()
{
Command.Cancel();
}
static void Main()
{
string connectionString = GetConnectionString();
try
{
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
Command = connection.CreateCommand();
Command.CommandText = "DROP TABLE TestCancel";
try
{
Command.ExecuteNonQuery();
}
catch { }
Command.CommandText = "CREATE TABLE TestCancel(co1 int, co2 char(10))";
Command.ExecuteNonQuery();
Command.CommandText = "INSERT INTO TestCancel VALUES (1, '1')";
Command.ExecuteNonQuery();
Command.CommandText = "SELECT * FROM TestCancel";
SqlDataReader reader = Command.ExecuteReader();
Thread rThread2 = new Thread(new ThreadStart(Thread_Cancel));
rThread2.Start();
rThread2.Join();
reader.Read();
System.Console.WriteLine(reader.FieldCount);
reader.Close();
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
}
static private string GetConnectionString()
{
// To avoid storing the connection string in your code,
// you can retrieve it from a configuration file.
return "Data Source=(local);Initial Catalog=AdventureWorks;"
+ "Integrated Security=SSPI";
}
}
You can only do cancellation checking and progress reporting between distinct lines of code. Usually both require that you dissect the code down to the lowest loop level, so you can do both of these things between/in the loop iterations. When I took my first step into BGW, I had the advantage that I needed to write the loop anyway, so it was no extra work. You have one of the worse cases - pre-existing code that you can only replicate or use as is.
Ideal case:
This operation should not take nearly as long as it does. 5-10 minutes indicates that there is something rather wrong with your design.
If the bulk of the time is transmission of data, then you are probably retrieving way too much data. Retrieving everything to do the filtering in the GUI is a very common mistake. Do as much filtering in the query as possible. Using a distributed database might also help with transmission performance.
If the bulk of the time is processing as part of the query operation (complex conditions), something in your general approach might have to change. There are various ways to trade off complex calculation against a bit of memory on the DBMS side. Views, afaik, can cache the results of operations while still maintaining transactional consistency.
But it really depends on what your backend DB/DBMS and use case are. A lot of them use SQL as the query language, so that alone does not allow us to predict which options you have.
Second best case:
The second best thing, if you cannot cut it down, would be to get the actual DB access code down to the lowest loop and do progress reporting/cancellation checking in it. That way you could actually use the existing cancellation token system inherent in BGW, as sketched below.
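For illustration only, a rough sketch of what that lowest-loop checking could look like, assuming you can replace DataTable.Load with a manual read loop (GetConnectionString() is the helper from the question; the handler and argument handling are made up for the example):
// needs: using System.ComponentModel; using System.Data; using System.Data.SqlClient;
private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    var worker = (BackgroundWorker)sender;   // WorkerSupportsCancellation must be true
    var table = new DataTable();
    using (var connection = new SqlConnection(GetConnectionString()))
    using (var command = new SqlCommand((string)e.Argument, connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            // Build the columns from the reader's schema once.
            for (int i = 0; i < reader.FieldCount; i++)
                table.Columns.Add(reader.GetName(i), reader.GetFieldType(i));
            // Copy rows one at a time so cancellation (and progress reporting)
            // can be checked between iterations.
            while (reader.Read())
            {
                if (worker.CancellationPending)
                {
                    e.Cancel = true;   // surfaces as e.Cancelled in RunWorkerCompleted
                    return;            // the using blocks clean up reader/command/connection
                }
                var row = table.NewRow();
                for (int i = 0; i < reader.FieldCount; i++)
                    row[i] = reader.GetValue(i);
                table.Rows.Add(row);
            }
        }
    }
    e.Result = table;
}
Note that this only helps once rows are actually streaming back; if all the time is spent before the first row arrives, you are back to something like SqlCommand.Cancel().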
Everything else
Using any other approach to cancellation is really a fallback. I wrote a lot on why it is bad, but felt that this would work better if I focus on the core issue - likely something wrong in the design of the DB and/or the query. Fixing those might well eliminate the issue altogether.
I am working on a setup where a scalable WCF Service Component is connected to a single MS SQL Server Database. The RESTful service allows users to save data into the DB as well as get data from it.
Whilst implementing a class handling the database connections / methods, I started struggling with correctly reusing prepared SqlCommands and the connection. I read up on the MSDN about connection pooling as well as how to use SqlCommand and SqlParameter.
My initial version of the class looks like this:
public class SqlRepository : IDisposable
{
private object syncRoot = new object();
private SqlConnection connection;
private SqlCommand saveDataCommand;
private SqlCommand getDataCommand;
public SqlRepository(string connectionString)
{
// establish sql connection
connection = new SqlConnection(connectionString);
connection.Open();
// save data
saveDataCommand = new SqlCommand("INSERT INTO Table (Operation, CustomerId, Data, DataId, CreationDate, ExpirationDate) VALUES (@Operation, @CustomerId, @Data, @DataId, @CreationDate, @ExpirationDate)", connection);
saveDataCommand.Parameters.Add(new SqlParameter("Operation", SqlDbType.NVarChar, 20));
saveDataCommand.Parameters.Add(new SqlParameter("CustomerId", SqlDbType.NVarChar, 50));
saveDataCommand.Parameters.Add(new SqlParameter("Data", SqlDbType.NVarChar, 50));
saveDataCommand.Parameters.Add(new SqlParameter("DataId", SqlDbType.NVarChar, 50));
saveDataCommand.Parameters.Add(new SqlParameter("CreationDate", SqlDbType.DateTime));
saveDataCommand.Parameters.Add(new SqlParameter("ExpirationDate", SqlDbType.DateTime));
saveDataCommand.Prepare();
// get data
getDataCommand = new SqlCommand("SELECT TOP 1 Data FROM Table WHERE CustomerId = @CustomerId AND DataId = @DataId AND ExpirationDate > @ExpirationDate ORDER BY CreationDate DESC", connection);
getDataCommand.Parameters.Add(new SqlParameter("CustomerId", SqlDbType.NVarChar, 50));
getDataCommand.Parameters.Add(new SqlParameter("DataId", SqlDbType.NVarChar, 50));
getDataCommand.Parameters.Add(new SqlParameter("ExpirationDate", SqlDbType.DateTime));
getDataCommand.Prepare();
}
public void SaveData(string customerId, string dataId, string operation, string data, DateTime expirationDate)
{
lock (syncRoot)
{
saveDataCommand.Parameters["Operation"].Value = operation;
saveDataCommand.Parameters["CustomerId"].Value = customerId;
saveDataCommand.Parameters["CreationDate"].Value = DateTime.UtcNow;
saveDataCommand.Parameters["ExpirationDate"].Value = expirationDate;
saveDataCommand.Parameters["Data"].Value = data;
saveDataCommand.Parameters["DataId"].Value = dataId;
saveDataCommand.ExecuteNonQuery();
}
}
public string GetData(string customerId, string dataId)
{
lock (syncRoot)
{
getDataCommand.Parameters["CustomerId"].Value = customerId;
getDataCommand.Parameters["DataId"].Value = dataId;
getDataCommand.Parameters["ExpirationDate"].Value = DateTime.UtcNow;
using (var reader = getDataCommand.ExecuteReader())
{
if (reader.Read())
{
string data = reader.GetFieldValue<string>(0);
return data;
}
else
{
return null;
}
}
}
}
public void Dispose()
{
try
{
if (connection != null)
{
connection.Close();
connection.Dispose();
}
DisposeCommand(saveDataCommand);
DisposeCommand(getDataCommand);
}
catch { }
}
private void DisposeCommand(SqlCommand command)
{
try
{
command.Dispose();
}
catch (Exception)
{
}
}
}
There are several aspects important to know:
I am using SqlCommand.Prepare() to speed up the process of executing the command
Reusing the commands avoids creating new objects with every call to the GetData and SaveData methods, thus avoiding pressure on the garbage collector
There is only one instance of the SqlRepository class, used by the WCF Service.
There are many many calls per minute to this service, so keeping a connection to the DB open is what I want.
Now I read up a bit more about connection pooling and the fact that it is highly recommended to use the SqlConnection object in a using statement to ensure disposal. To my understanding, the connection pooling technology takes care of leaving the connection open even though the Dispose() method of SqlConnection has been called by the using statement.
The way to use this would be to have a using(SqlConnection connection = new SqlConnection(connectionString)) inside the GetData and SaveData methods. However, then - at least to my intuition - I would need to create the SqlCommands inside the GetData / SaveData methods as well. Or not? I could not find any documentation on how to reuse the commands that way. Also, wouldn't the call to SqlCommand.Prepare() be meaningless if I need to prepare a new command every time I enter the GetData / SaveData methods?
How do I properly implement the SqlRepository class? The way it is now, I believe that if the connection breaks (maybe because the DB server goes down for a while and reboots), then the SqlRepository class will not automatically recover and keep functioning. To the best of my knowledge, this sort of failsafe scenario is handled by the pooling technology.
Thanks for ideas and feedback!
Christian
Do not reuse the SqlCommand instances.
You are synchronizing database access.
With your implementation, you are re-using a small object (which is no problem for the GC, even if there are thousands) in exchange for giving up concurrent DB operations.
Remove the synchronization locks.
Create new instances of SqlCommands for each database operation.
Do not call Prepare. Prepare speeds up repeated executions, but after calling ExecuteReader() on a SqlCommand with CommandType = Text and a non-zero number of parameters, the command is unprepared internally anyway.
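A minimal sketch of that shape, reusing the SQL and parameter definitions from your question (illustrative only, not a drop-in replacement; GetData would follow the same pattern):
using System;
using System.Data;
using System.Data.SqlClient;

public class SqlRepository
{
    private readonly string connectionString;

    public SqlRepository(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void SaveData(string customerId, string dataId, string operation, string data, DateTime expirationDate)
    {
        // A new connection per call is cheap: with pooling (the default) the pool
        // hands back an already-open physical connection, and broken connections
        // are replaced automatically, so no manual recovery logic is needed.
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO Table (Operation, CustomerId, Data, DataId, CreationDate, ExpirationDate) " +
            "VALUES (@Operation, @CustomerId, @Data, @DataId, @CreationDate, @ExpirationDate)", connection))
        {
            command.Parameters.Add("@Operation", SqlDbType.NVarChar, 20).Value = operation;
            command.Parameters.Add("@CustomerId", SqlDbType.NVarChar, 50).Value = customerId;
            command.Parameters.Add("@Data", SqlDbType.NVarChar, 50).Value = data;
            command.Parameters.Add("@DataId", SqlDbType.NVarChar, 50).Value = dataId;
            command.Parameters.Add("@CreationDate", SqlDbType.DateTime).Value = DateTime.UtcNow;
            command.Parameters.Add("@ExpirationDate", SqlDbType.DateTime).Value = expirationDate;
            connection.Open();
            command.ExecuteNonQuery();   // no lock, no Prepare
        }
    }
}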
I have a large application based on Dynamics CRM 2011 that in various places has code that must query for a record based upon some criteria and create it if it doesn't exist else update it.
An example of the kind of thing I am talking about would be similar to this:
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if(record == null)
{
record = new stk_balance();
record.Id = Guid.NewGuid();
record.stk_value = 100;
context.AddObject(record);
}
else
{
record.stk_value += 100;
context.UpdateObject(record);
}
context.SaveChanges();
In terms of the CRM 2011 implementation (although not strictly relevant to this question), the code could be triggered from synchronous or asynchronous plugins. The issue is that the code is not thread safe: between checking whether the record exists and creating it if it doesn't, another thread could come in and do the same thing first, resulting in duplicate records.
Normal locking methods are not reliable due to the architecture of the system, various services using multiple threads could all be using the same code, and these multiple services are also load balanced across multiple machines.
In trying to find a solution to this problem that doesn't add massive amounts of extra complexity and doesn't compromise the idea of not having a single point of failure or a single point where a bottleneck could occur I came across the idea of using SQL Server application locks.
I came up with the following class:
public class SQLLock : IDisposable
{
//Lock constants
private const string _lockMode = "Exclusive";
private const string _lockOwner = "Transaction";
private const string _lockDbPrincipal = "public";
//Variable for storing the connection passed to the constructor
private SqlConnection _connection;
//Variable for storing the name of the Application Lock created in SQL
private string _lockName;
//Variable for storing the timeout value of the lock
private int _lockTimeout;
//Variable for storing the SQL Transaction containing the lock
private SqlTransaction _transaction;
//Variable for storing if the lock was created ok
private bool _lockCreated = false;
public SQLLock (string lockName, int lockTimeout = 180000)
{
_connection = Connection.GetMasterDbConnection();
_lockName = lockName;
_lockTimeout = lockTimeout;
//Create the Application Lock
CreateLock();
}
public void Dispose()
{
//Release the Application Lock if it was created
if (_lockCreated)
{
ReleaseLock();
}
_connection.Close();
_connection.Dispose();
}
private void CreateLock()
{
_transaction = _connection.BeginTransaction();
using (SqlCommand createCmd = _connection.CreateCommand())
{
createCmd.Transaction = _transaction;
createCmd.CommandType = System.Data.CommandType.Text;
StringBuilder sbCreateCommand = new StringBuilder();
sbCreateCommand.AppendLine("DECLARE #res INT");
sbCreateCommand.AppendLine("EXEC #res = sp_getapplock");
sbCreateCommand.Append("#Resource = '").Append(_lockName).AppendLine("',");
sbCreateCommand.Append("#LockMode = '").Append(_lockMode).AppendLine("',");
sbCreateCommand.Append("#LockOwner = '").Append(_lockOwner).AppendLine("',");
sbCreateCommand.Append("#LockTimeout = ").Append(_lockTimeout).AppendLine(",");
sbCreateCommand.Append("#DbPrincipal = '").Append(_lockDbPrincipal).AppendLine("'");
sbCreateCommand.AppendLine("IF #res NOT IN (0, 1)");
sbCreateCommand.AppendLine("BEGIN");
sbCreateCommand.AppendLine("RAISERROR ( 'Unable to acquire Lock', 16, 1 )");
sbCreateCommand.AppendLine("END");
createCmd.CommandText = sbCreateCommand.ToString();
try
{
createCmd.ExecuteNonQuery();
_lockCreated = true;
}
catch (Exception ex)
{
_transaction.Rollback();
throw new Exception(string.Format("Unable to get SQL Application Lock on '{0}'", _lockName), ex);
}
}
}
private void ReleaseLock()
{
using (SqlCommand releaseCmd = _connection.CreateCommand())
{
releaseCmd.Transaction = _transaction;
releaseCmd.CommandType = System.Data.CommandType.StoredProcedure;
releaseCmd.CommandText = "sp_releaseapplock";
releaseCmd.Parameters.AddWithValue("#Resource", _lockName);
releaseCmd.Parameters.AddWithValue("#LockOwner", _lockOwner);
releaseCmd.Parameters.AddWithValue("#DbPrincipal", _lockDbPrincipal);
try
{
releaseCmd.ExecuteNonQuery();
}
catch {}
}
_transaction.Commit();
}
}
I would use this in my code to create a SQL Server application lock, using the unique key I am querying for as the lock name, like this:
using (var sqlLock = new SQLLock(id))
{
//Code to check for and create or update record here
}
Now this approach seems to work; however, I am by no means any kind of SQL Server expert and am wary about putting this anywhere near production code.
My question really has 3 parts
1. Is this a really bad idea because of something I haven't considered?
Are SQL Server application locks completely unsuitable for this purpose?
Is there a maximum number of application locks (with different names) you can have at a time?
Are there performance considerations if a potentially large number of locks are created?
What else could be an issue with the general approach?
2. Is the solution actually implemented above any good?
If SQL Server application locks are usable like this, have I actually used them properly?
Is there a better way of using SQL Server to achieve the same result?
In the code above I am getting a connection to the Master database and creating the locks in there. Does that potentially cause other issues? Should I create the locks in a different database?
3. Is there a completely alternative approach that could be used that doesn't use SQL Server application locks?
I can't use stored procedures to create and update the record (unsupported in CRM 2011).
I don't want to add a single point of failure.
You can do this much more easily.
//make sure your plugin runs within a transaction, this is the case for stage 20 and 40
//you can check this with IExecutionContext.IsInTransaction
//does not work with offline plugins, but works within CRM Online (Cloud) and is fully supported
//also works on transaction rollback
var lockUpdateEntity = new dummy_lock_entity(); //simple technical entity with as many rows as different lock barriers you need
lockUpdateEntity.Id = Guid.Parse("well known guid"); //well known guid for this barrier
lockUpdateEntity.dummy_field = Guid.NewGuid(); //just update/change a field to create a lock, regardless of its content
//--------------- this is untested by me, i use the next one
context.UpdateObject(lockUpdateEntity);
context.SaveChanges();
//---------------
//OR
//--------------- i use this one, but you need a reference to your OrganizationService
OrganizationService.Update(lockUpdateEntity);
//---------------
//threads wait here if they have no lock for dummy_lock_entity with "well known guid"
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if(record == null)
{
record = new stk_balance();
//record.Id = Guid.NewGuid(); //not needed
record.stk_value = 100;
context.AddObject(record);
}
else
{
record.stk_value += 100;
context.UpdateObject(record);
}
context.SaveChanges();
//let the pipeline flow and the transaction complete ...
For more background info refer to http://www.crmsoftwareblog.com/2012/01/implementing-robust-microsoft-dynamics-crm-2011-auto-numbering-using-transactions/
I have two methods, “ExecuteNoQuery” (which calls dbCommand.ExecuteNonQuery()) and “Query” (which calls dbCommand.ExecuteReader()).
Both methods use the same connection object. In the ExecuteNoQuery method a lock is implemented (using the connection object), and the Query method is implemented without a lock. If multiple threads access both methods simultaneously, what will happen?
Note: In the Query method, custom connection pooling is implemented with the same object.
public int ExecuteNoQuery(string sqlquery, Hashtable htData) {
try {
lock(Myservice.dbcon)
{
using (OracleCommand dbCommand = new OracleCommand(sqlquery, Myservice.dbcon))
{
int rowCount = dbCommand.ExecuteNonQuery();
return 1;
}
}
}
catch
{
// original catch block not shown in the question; rethrow so errors surface
throw;
}
}
public OracleDataReader Query(string sqlquery, Hashtable htData)
{
try
{
OracleDataReader dbReader = null;
Random ran = new Random();
int randomnumber = ran.Next(1,5);
Myservice.dbcon = (OracleConnection) Myservice.htdbcon
["Connection_" +randomnumber];
if (Myservice.dbcon.State != System.Data.ConnectionState.Executing
|| Myservice.dbcon.State != System.Data.ConnectionState.Fetching)
{
using (OracleCommand dbCommand = new OracleCommand(sqlquery,
Myservice.dbcon))
{
dbReader = dbCommand.ExecuteReader();
}
}
return dbReader;
}
catch
{
// original catch block not shown in the question; rethrow so errors surface
throw;
}
}
Both the methods are using the same connection object.
Since one method uses a lock and the other does not: bad things. No guarantees are made by the object for this scenario, so you should expect it to fail in interesting ways. You should use the same lock object from both places, or better: only use a connection in isolated code, not a shared connection. With connection pooling, it is very rarely useful to have a shared connection object somewhere. A far more suitable pattern is usually to obtain a connection when you need it, and then dispose it. If the underlying provider supports pooling, this will perform ideally, without any issues of synchronization, and will allow parallel queries etc. For example:
using (var conn = SomeUtilityClass.GetOpenConnection())
using (var cmd = conn.CreateCommand())
{
cmd.CommandText = sqlquery;
int rowCount = cmd.ExecuteNonQuery();
return 1;
}
and, importantly, do the same from the Query method; no locks, no global shared connections.
I'd also be concerned by the lack of parameters, btw. That suggests you are opening yourself up to SQL injection.
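For completeness, a sketch of the Query side under the same pattern. SomeUtilityClass.GetOpenConnection() is the hypothetical helper from the snippet above, the htData loop assumes the hashtable holds parameter name/value pairs (the question never shows how it is used), and returning a DataTable instead of an open OracleDataReader is a deliberate simplification so the connection can be disposed inside the method:
// needs: using System.Collections; using System.Data;
public DataTable Query(string sqlquery, Hashtable htData)
{
    using (var conn = SomeUtilityClass.GetOpenConnection())
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = sqlquery;            // ideally built with bind parameters, not concatenation
        foreach (DictionaryEntry entry in htData)
        {
            var p = cmd.CreateParameter();     // bind values from the hashtable as parameters
            p.ParameterName = (string)entry.Key;
            p.Value = entry.Value;
            cmd.Parameters.Add(p);
        }
        var table = new DataTable();
        using (var reader = cmd.ExecuteReader())
        {
            table.Load(reader);
        }
        return table;
    }
}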
I feel like this behavior should not be happening. Here's the scenario:
1. Start a long-running sql transaction.
2. The thread that ran the sql command gets aborted (not by our code!)
3. When the thread returns to managed code, the SqlConnection's state is "Closed" - but the transaction is still open on the sql server.
4. The SQLConnection can be re-opened, and you can try to call rollback on the transaction, but it has no effect (not that I would expect this behavior; the point is there is no way to access the transaction on the db and roll it back).
The issue is simply that the transaction is not cleaned up properly when the thread aborts. This was a problem with .Net 1.1, 2.0 and 2.0 SP1. We are running .Net 3.5 SP1.
Here is a sample program that illustrates the issue.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.SqlClient;
using System.Threading;
namespace ConsoleApplication1
{
class Run
{
static Thread transactionThread;
public class ConnectionHolder : IDisposable
{
public void Dispose()
{
}
public void executeLongTransaction()
{
Console.WriteLine("Starting a long running transaction.");
using (SqlConnection _con = new SqlConnection("Data Source=<YourServer>;Initial Catalog=<YourDB>;Integrated Security=True;Persist Security Info=False;Max Pool Size=200;MultipleActiveResultSets=True;Connect Timeout=30;Application Name=ConsoleApplication1.vshost"))
{
try
{
_con.Open();
SqlTransaction trans = _con.BeginTransaction();
SqlCommand cmd = new SqlCommand("update <YourTable> set Name = 'XXX' where ID = #0; waitfor delay '00:00:05'", _con, trans);
cmd.Parameters.Add(new SqlParameter("0", 340));
cmd.ExecuteNonQuery();
cmd.Transaction.Commit();
Console.WriteLine("Finished the long running transaction.");
}
catch (ThreadAbortException tae)
{
Console.WriteLine("Thread - caught ThreadAbortException in executeLongTransaction - resetting.");
Console.WriteLine("Exception message: {0}", tae.Message);
}
}
}
}
static void killTransactionThread()
{
Thread.Sleep(2 * 1000);
// We're not doing this anywhere in our real code. This is for simulation
// purposes only!
transactionThread.Abort();
Console.WriteLine("Killing the transaction thread...");
}
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main(string[] args)
{
using (var connectionHolder = new ConnectionHolder())
{
transactionThread = new Thread(connectionHolder.executeLongTransaction);
transactionThread.Start();
new Thread(killTransactionThread).Start();
transactionThread.Join();
Console.WriteLine("The transaction thread has died. Please run 'select * from sysprocesses where open_tran > 0' now while this window remains open. \n\n");
Console.Read();
}
}
}
}
There is a Microsoft Hotfix targeted at .Net2.0 SP1 that was supposed to address this, but we obviously have newer DLL's (.Net 3.5 SP1) that don't match the version numbers listed in this hotfix.
Can anyone explain this behavior, and why the ThreadAbort is still not cleaning up the sql transaction properly? Does .Net 3.5 SP1 not include this hotfix, or is this behavior that is technically correct?
Since you're using SqlConnection with pooling, your code is never in control of closing the connections. The pool is. On the server side, a pending transaction will be rolled back when the connection is truly closed (socket closed), but with pooling the server side never sees a connection close. Without the connection closing (either by a physical disconnect at the socket/pipe/LPC layer or by an sp_reset_connection call), the server cannot abort the pending transaction. So it really boils down to the fact that the connection does not get properly released/reset. I don't understand why you're trying to complicate the code with explicit thread abort handling and attempts to reopen a closed transaction (that will never work). You should simply wrap the SqlConnection in a using(...) block; the implied finally and connection Dispose will run even on thread abort.
My recommendation would be to keep things simple: ditch the fancy thread abort handling and replace it with a plain 'using' block: using(connection) { using(transaction) { code; commit(); } }.
Of course I assume you do not propagate the transaction context into a different scope in the server (you do not use sp_getbindtoken and friends, and you do not enroll in distributed transactions).
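Applied to the executeLongTransaction method from the question, that shape is roughly this (a sketch; the connection string parameter and the SQL are the placeholders from the question):
public void ExecuteLongTransaction(string connectionString)
{
    // Both using blocks run their Dispose even when the thread is aborted, so an
    // uncommitted transaction is rolled back and the pooled connection is reset
    // before it can be handed out again.
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        using (var trans = con.BeginTransaction())
        using (var cmd = new SqlCommand(
            "update <YourTable> set Name = 'XXX' where ID = @0; waitfor delay '00:00:05'", con, trans))
        {
            cmd.Parameters.Add(new SqlParameter("0", 340));
            cmd.ExecuteNonQuery();
            trans.Commit();   // only reached if nothing threw
        }
    }
}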
This little program shows that the Thread.Abort properly closes a connection and the transaction is rolled back:
using System;
using System.Data.SqlClient;
using testThreadAbort.Properties;
using System.Threading;
using System.Diagnostics;
namespace testThreadAbort
{
class Program
{
static AutoResetEvent evReady = new AutoResetEvent(false);
static long xactId = 0;
static void ThreadFunc()
{
using (SqlConnection conn = new SqlConnection(Settings.Default.conn))
{
conn.Open();
using (SqlTransaction trn = conn.BeginTransaction())
{
// Retrieve our XACTID
//
SqlCommand cmd = new SqlCommand("select transaction_id from sys.dm_tran_current_transaction", conn, trn);
xactId = (long) cmd.ExecuteScalar();
Console.Out.WriteLine("XactID: {0}", xactId);
cmd = new SqlCommand(#"
insert into test (a) values (1);
waitfor delay '00:01:00'", conn, trn);
// Signal readyness and wait...
//
evReady.Set();
cmd.ExecuteNonQuery();
trn.Commit();
}
}
}
static void Main(string[] args)
{
try
{
using (SqlConnection conn = new SqlConnection(Settings.Default.conn))
{
conn.Open();
SqlCommand cmd = new SqlCommand(#"
if object_id('test') is not null
begin
drop table test;
end
create table test (a int);", conn);
cmd.ExecuteNonQuery();
}
Thread thread = new Thread(new ThreadStart(ThreadFunc));
thread.Start();
evReady.WaitOne();
Thread.Sleep(TimeSpan.FromSeconds(5));
Console.Out.WriteLine("Aborting...");
thread.Abort();
thread.Join();
Console.Out.WriteLine("Aborted");
Debug.Assert(0 != xactId);
using (SqlConnection conn = new SqlConnection(Settings.Default.conn))
{
conn.Open();
// checked if xactId is still active
//
SqlCommand cmd = new SqlCommand("select count(*) from sys.dm_tran_active_transactions where transaction_id = #xactId", conn);
cmd.Parameters.AddWithValue("#xactId", xactId);
object count = cmd.ExecuteScalar();
Console.WriteLine("Active transactions with xactId {0}: {1}", xactId, count);
// Check count of rows in test (would block on row lock)
//
cmd = new SqlCommand("select count(*) from test", conn);
count = cmd.ExecuteScalar();
Console.WriteLine("Count of rows in text: {0}", count);
}
}
catch (Exception e)
{
Console.Error.Write(e);
}
}
}
}
This is a bug in Microsoft's MARS implementation. Disabling MARS in your connection string will make the problem go away.
If you require MARS, and are comfortable making your application dependent on another company's internal implementation, familiarize yourself with http://dotnet.sys-con.com/node/39040, break out .NET Reflector, and look at the connection and pool classes. You have to store a copy of the DbConnectionInternal property before the failure occurs. Later, use reflection to pass the reference to a deallocation method in the internal pooling class. This will stop your connection from lingering for 4:00 - 7:40 minutes.
There are surely other ways to force the connection out of the pool and to be disposed. Short of a hotfix from Microsoft, though, reflection seems to be necessary. The public methods in the ADO.NET API don't seem to help.