BackgroundService await Task.Delay in infinite loop leaks SqlConnection objects hard - C#

I have code similar to this running in multiple BackgroundServices (.NET 7). After just a few days of runtime (in production the delay between loops is minutes to hours), this leads to a massive memory leak: tens of thousands of dangling SqlConnection handles (probably every connection ever used is still referenced, even though it was properly closed against the DB). I'm using Microsoft.Data.SqlClient 5.0.1.
MRE
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
using Microsoft.Data.SqlClient;

namespace Memleak;

static class Program
{
    const string connectionString = "Server=lpc:localhost;"
        + "Integrated Security=True;Encrypt=False;"
        + "MultipleActiveResultSets=False;Pooling=False;";

    static async Task Main(params string[] args)
    {
        using CancellationTokenSource cts = new();
        // not to forget it running
        cts.CancelAfter(TimeSpan.FromMinutes(15));
        CancellationToken ct = cts.Token;
        using Process process = Process.GetCurrentProcess();
        long loop = 1;
        while (true)
        {
            await ConnectionAsync(ct);
            // this seems to be the issue (delay duration is irrelevant)
            await Task.Delay(TimeSpan.FromMilliseconds(1), ct);
            process.Refresh();
            long workingSet = process.WorkingSet64;
            Console.WriteLine("PID:{0} RUN:{1:N0} RAM:{2:N0}",
                process.Id, loop, workingSet);
            ++loop;
        }
    }

    private static async Task ConnectionAsync(CancellationToken ct = default)
    {
        using SqlConnection connection = new(connectionString);
        await connection.OpenAsync(ct);
        using SqlCommand command = connection.CreateCommand();
        command.CommandText = "select cast(1 as bit);";
        using SqlDataReader reader = await command.ExecuteReaderAsync(ct);
        if (await reader.ReadAsync(ct))
        {
            _ = reader.GetBoolean(0);
        }
    }
}
Leak
The following command-prompt commands show the leak:
// dotnet tool install --global dotnet-dump
dotnet-dump collect -p pid
dotnet-dump analyze dump_name.dmp
dumpheap -type Microsoft.Data.SqlClient.SqlConnection -stat
dumpheap -mt mtid
dumpobj objid
gcroot objid
The last command shows a huge list of System.Threading.CancellationTokenSource+CallbackNode entries keeping each SqlConnection object alive.
Question
Is this a bug or working as expected (and if so, why)? And is there any easy workaround other than getting rid of all async code and just using Threads? I cannot use Timers, since the delays vary with certain factors (when work is available, delays are shorter; when it is not, delays are longer).
A non-async version does not leak
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
using Microsoft.Data.SqlClient;

namespace NotMemleak;

static class Program
{
    const string connectionString = "Server=lpc:localhost;" +
        "Integrated Security=True;Encrypt=False;" +
        "MultipleActiveResultSets=False;Pooling=False;";

    static void Main(params string[] args)
    {
        using CancellationTokenSource cts = new();
        // not to forget it running
        cts.CancelAfter(TimeSpan.FromMinutes(15));
        CancellationToken ct = cts.Token;
        using Process process = Process.GetCurrentProcess();
        long loop = 1;
        while (loop < 1000)
        {
            Connection();
            // this seems to be the issue (delay duration is irrelevant)
            ct.WaitHandle.WaitOne(TimeSpan.FromMilliseconds(1));
            // Thread.Sleep();
            process.Refresh();
            long workingSet = process.WorkingSet64;
            Console.WriteLine("PID:{0} RUN:{1:N0} RAM:{2:N0}",
                process.Id, loop, workingSet);
            ++loop;
        }
        Console.WriteLine();
        Console.WriteLine("(press any key to exit)");
        Console.ReadKey(true);
    }

    private static void Connection()
    {
        using SqlConnection connection = new(connectionString);
        connection.Open();
        using SqlCommand command = connection.CreateCommand();
        command.CommandText = "select cast(1 as bit);";
        using SqlDataReader reader = command.ExecuteReader();
        if (reader.Read())
        {
            _ = reader.GetBoolean(0);
        }
    }
}

I believe this is related to this issue in GitHub. As I understand it, it's a regression introduced in SqlClient 5.0.1. Basically, on this line:
await reader.ReadAsync(ct)
You pass a token, and the reader internally registers a callback on that token to be run when it is cancelled. However, this registration is not properly disposed on all code paths. This in turn leaves your SqlConnection instances reachable from the CancellationTokenSource through that callback registration (which references the data reader, which references the command, which references the connection).
This is fixed in 5.1.0, which had a stable release on 2023-01-19.
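If upgrading isn't immediately possible, one workaround (untested against this exact regression, so treat it as a sketch) is to hand SqlClient a short-lived token per operation instead of the long-lived application token, so that whatever registrations it fails to dispose die with a per-iteration CancellationTokenSource. Against the MRE above, the loop would look roughly like this:
while (true)
{
    // Leaked registrations attach to this short-lived linked CTS rather than
    // to the long-lived application token, and are released when it is
    // disposed at the end of each iteration.
    using (CancellationTokenSource linked =
        CancellationTokenSource.CreateLinkedTokenSource(ct))
    {
        await ConnectionAsync(linked.Token);
    }
    await Task.Delay(TimeSpan.FromMilliseconds(1), ct);
}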

Related

No exception being thrown when opening MySqlConnection?

I'm just starting out with async and Tasks, and my code has stopped processing. It happens when I receive an incoming network packet and try to communicate with the database inside the packet handler.
public class ClientConnectedPacket : IClientPacket
{
private readonly EntityFactory _entityFactory;
public ClientConnectedPacket(EntityFactory entityFactory)
{
_entityFactory= entityFactory;
}
public async Task Handle(NetworkClient client, ClientPacketReader reader)
{
client.Entity = await _entityFactory.CreateInstanceAsync( reader.GetValueByKey("unique_device_id"));
// this Console.WriteLine never gets reached
Console.WriteLine($"Client [{reader.GetValueByKey("unique_device_id")}] has connected");
}
}
The Handle method gets called from an async task
if (_packetRepository.TryGetPacketByName(packetName, out var packet))
{
await packet.Handle(this, new ClientPacketReader(packetName, packetData));
}
else
{
Console.WriteLine("Unknown packet: " + packetName);
}
Here is the method which I think is causing the issue
public async Task<Entity> CreateInstanceAsync(string uniqueId)
{
await using (var dbConnection = _databaseProvider.GetConnection())
{
dbConnection.SetQuery("SELECT COUNT(NULL) FROM `entities` WHERE `unique_id` = @uniqueId");
dbConnection.AddParameter("uniqueId", uniqueId);
var row = await dbConnection.ExecuteRowAsync();
if (row != null)
{
return new Entity(uniqueId, false);
}
}
return new Entity(uniqueId,true);
}
DatabaseProvider's GetConnection method:
public DatabaseConnection GetConnection()
{
var connection = new MySqlConnection(_connectionString);
var command = connection.CreateCommand();
return new DatabaseConnection(_logFactory.GetLogger(), connection, command);
}
DatabaseConnection's constructor:
public DatabaseConnection(ILogger logger, MySqlConnection connection, MySqlCommand command)
{
_logger = logger;
_connection = connection;
_command = command;
_connection.Open();
}
When I comment out this line, it reaches the Console.WriteLine
_connection.Open();
I ran a POC project spinning up 100 parallel tasks with both MySql.Data 8.0.19 and MySqlConnector 0.63.2 in a .NET Core 3.1 console application. I create, open and dispose the connection within the context of every single task. Both providers run to completion without errors.
The key difference is that MySql.Data executes queries synchronously even though the library provides async method signatures, e.g. ExecuteReaderAsync() or ExecuteScalarAsync(), while MySqlConnector runs truly asynchronously.
You may be running into:
a deadlock situation not specifically related to the MySQL provider
exceptions inside your tasks not being handled properly (inspect the task's associated AggregateException and also monitor the MySQL db logs; see the sketch after this list)
execution still being blocked (not yet returning a result) at the point where you assume it's not working, if you are running a high number of parallel tasks with MySql.Data, since it executes synchronously
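On the exception-handling point in particular: if the handler's task faults and nothing awaits or observes it, the failure is silent and looks exactly like "stopped processing". A minimal sketch of surfacing such failures at the dispatch site shown in the question (the logging style is illustrative):
if (_packetRepository.TryGetPacketByName(packetName, out var packet))
{
    try
    {
        await packet.Handle(this, new ClientPacketReader(packetName, packetData));
    }
    catch (Exception ex)
    {
        // Any MySqlException, timeout or deadlock thrown inside
        // CreateInstanceAsync surfaces here instead of vanishing with the task.
        Console.WriteLine($"Packet '{packetName}' failed: {ex}");
    }
}
else
{
    Console.WriteLine("Unknown packet: " + packetName);
}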
Multi-threading with MySQL must use independent connections. Given that, multithreading is not a MySQL question but an issue for the client language, C# in your question.
That is, build your threads without regard to MySQL, then create a connection in each thread that needs to do queries. It will be on your shoulders if you need to pass data between the threads.
I usually find that optimizing queries eliminates the temptation to multi-thread my applications.
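To make that concrete, here is a minimal sketch of the one-connection-per-task pattern; MySqlConnection is common to both MySql.Data and MySqlConnector, the entities table and unique_id column come from the question, and the connection string is a placeholder:
using System;
using System.Threading.Tasks;
using MySqlConnector; // or MySql.Data.MySqlClient

static class PerTaskConnectionExample
{
    // Each task opens, uses and disposes its own connection; nothing is shared.
    public static async Task WorkerAsync(string connectionString, string uniqueId)
    {
        using var connection = new MySqlConnection(connectionString);
        await connection.OpenAsync();

        using var command = connection.CreateCommand();
        command.CommandText = "SELECT COUNT(*) FROM entities WHERE unique_id = @uniqueId";
        command.Parameters.AddWithValue("@uniqueId", uniqueId);

        long count = Convert.ToInt64(await command.ExecuteScalarAsync());
        Console.WriteLine($"{uniqueId}: {count}");
    }
}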

TransactionScope breaking SqlConnection pooling?

I have an odd situation with TransactionScope and async/synchronous SQL calls that I'm having difficulty understanding. I hope that someone with a deeper understanding of the ins and outs of these kinds of operations can shed some light on the issue.
The situation:
I have a NUnit testfixture which creates a TransactionScope during [SetUp] and Disposes it at [TearDown] to let each test run on the same data. I have a series of tests which kick off an asynchronous operation on the database and then execute a synchronous operation on the database. The first such test completes successfully. The second such test fails with "There is already an open DataReader associated with this Command which must be closed first.".
If I comment out the TransactionScope entirely, all the tests pass.
I tried various different TransactionScope options, and Complete / Dispose, but the same issue occurs.
I am using the Resharper test runner on an NUnit test, .NET 4.5.1.
I realize the "correct" answer may be "make everything async await". That's not an option for me, unfortunately.
I don't want to enable MARS, as this issue only occurs in tests.
I don't want to use GetAwaiter().GetResult() due to the potential deadlocks.
What it looks like to me is that once a TransactionScope.Dispose/Complete is called, the automatic SQLConnection pooling loses track of which connections have open DataReaders. It hands out the same SqlConnection to two simultaneously running operations, and the second dies.
My primary question is "what is causing this behavior (specifically)?"
My secondary question is "is there anything that can be done to safely resolve the issue?"
The replicating code below prints out the client connection Ids. On my machine, the ClientConnectionId for the ASYNC and SYNC calls in the Second test case are always the same.
Replicating Code:
[TestFixture]
public class DataReaderTests
{
private TransactionScope _scope;
private string _connString = @"my connection string";
[SetUp]
public void Setup()
{
var options = new TransactionOptions()
{
IsolationLevel = IsolationLevel.ReadCommitted,
Timeout = TimeSpan.FromMinutes(1)
};
_scope = new TransactionScope(TransactionScopeOption.RequiresNew, options, TransactionScopeAsyncFlowOption.Enabled);
}
[Test]
[TestCase("First")]
[TestCase("Second")]
public void Test(string name)
{
DoAsyncThing().ConfigureAwait(false);
using (var conn = new SqlConnection(_connString))
{
try
{
conn.Open();
Console.WriteLine("SYNC: " + conn.ClientConnectionId);
using (var cmd = conn.CreateCommand())
{
cmd.CommandText = "SELECT 1";
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
int id = reader.GetInt32(0);
}
}
}
}
catch (TransactionAbortedException tax)
{
Console.WriteLine("ERROR: " + ((SqlException)tax.InnerException.InnerException).ClientConnectionId);
throw;
}
}
}
private async Task DoAsyncThing()
{
using (var connection = new SqlConnection(_connString))
{
await connection.OpenAsync();
Console.WriteLine("ASYNC: " + connection.ClientConnectionId);
using (var cmd = connection.CreateCommand())
{
cmd.CommandText = "WAITFOR DELAY '00:02';";
await cmd.ExecuteNonQueryAsync();
Console.WriteLine("ASYNC COMPLETE");
}
}
}
[TearDown]
public void Teardown()
{
_scope.Dispose();
}
}
Check out this answer
I think the gist is that you cannot have two active SQL commands executing over the same connection at the same time without a special connection string property. When you are operating under the transaction scope, you should find that both SqlConnection objects have the same ClientConnectionId. However, if you remove the transaction scope they are different, which I believe implies that they are operating on separate connections.
Adding "MultipleActiveResultSets=true" to the connection string fixed the issue for me. Another alternative is to replace
DoAsyncThing().ConfigureAwait(false);
with
DoAsyncThing().ConfigureAwait(false).GetAwaiter().GetResult();
which will terminate the first command before starting the second command.
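For context, here is a sketch of how those two alternatives slot into the test fixture from the question; the connection string is a placeholder and everything else is unchanged:
// Option 1: let both commands share the enlisted connection by enabling MARS
// (placeholder connection string; only the last property matters here).
private string _connString =
    @"Server=.;Database=MyDb;Integrated Security=True;MultipleActiveResultSets=True";

// Option 2: block until the async operation has completed before running the
// synchronous command, so the two never overlap on the same connection.
[Test]
[TestCase("First")]
[TestCase("Second")]
public void Test(string name)
{
    DoAsyncThing().ConfigureAwait(false).GetAwaiter().GetResult();
    // ... synchronous ADO.NET code from the question follows unchanged ...
}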

How to lock an object when using load balancing

Background: I'm writing a function putting long lasting operations in a queue, using C#,
and each operation is kind of divided into 3 steps:
1. database operation (update/delete/add data)
2. long time calculation using web service
3. database operation (save the calculation result of step 2) on the same db table as in step 1, and check the consistency of the db table, e.g., that the items are the same as in step 1 (please see below for a more detailed example)
In order to avoid dirty data or corruption, I use a lock object (a static singleton object) to ensure the 3 steps are done as a whole transaction, because when multiple users call the function, they may modify the same db table at different steps of their own operations without this lock, e.g., user2 is deleting item A in his step 1 while user1 is checking whether A still exists in his step 3. (Additional info: I'm also using TransactionScope from Entity Framework to run each database operation as a transaction, with repeatable read isolation.)
However, I need to put this to a cloud computing platform which uses load balancing mechanism, so actually my lock object won't take effect, because the function will be deployed on different servers.
Question: what can I do to make my lock object working under above circumstance?
This is a tricky problem - you need a distributed lock, or some sort of shared state.
Since you already have the database, you could change your implementation from a "static C# lock" and instead let the database manage the lock for you over the whole "transaction".
You don't say what database you are using, but if it's SQL Server, then you can use an application lock to achieve this. This lets you explicitly "lock" an object, and all other clients will wait until that object is unlocked. Check out:
http://technet.microsoft.com/en-us/library/ms189823.aspx
I've coded up an example implementation below. Start two instances to test it out.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
var locker = new SqlApplicationLock("MyAceApplication",
"Server=xxx;Database=scratch;User Id=xx;Password=xxx;");
Console.WriteLine("Aquiring the lock");
using (locker.TakeLock(TimeSpan.FromMinutes(2)))
{
Console.WriteLine("Lock Aquired, doing work which no one else can do. Press any key to release the lock.");
Console.ReadKey();
}
Console.WriteLine("Lock Released");
}
class SqlApplicationLock : IDisposable
{
private readonly String _uniqueId;
private readonly SqlConnection _sqlConnection;
private Boolean _isLockTaken = false;
public SqlApplicationLock(
String uniqueId,
String connectionString)
{
_uniqueId = uniqueId;
_sqlConnection = new SqlConnection(connectionString);
_sqlConnection.Open();
}
public IDisposable TakeLock(TimeSpan takeLockTimeout)
{
using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Suppress))
{
SqlCommand sqlCommand = new SqlCommand("sp_getapplock", _sqlConnection);
sqlCommand.CommandType = CommandType.StoredProcedure;
sqlCommand.CommandTimeout = (int)takeLockTimeout.TotalSeconds;
sqlCommand.Parameters.AddWithValue("Resource", _uniqueId);
sqlCommand.Parameters.AddWithValue("LockOwner", "Session");
sqlCommand.Parameters.AddWithValue("LockMode", "Exclusive");
sqlCommand.Parameters.AddWithValue("LockTimeout", (Int32)takeLockTimeout.TotalMilliseconds);
SqlParameter returnValue = sqlCommand.Parameters.Add("ReturnValue", SqlDbType.Int);
returnValue.Direction = ParameterDirection.ReturnValue;
sqlCommand.ExecuteNonQuery();
if ((int)returnValue.Value < 0)
{
throw new Exception(String.Format("sp_getapplock failed with errorCode '{0}'",
returnValue.Value));
}
_isLockTaken = true;
transactionScope.Complete();
}
return this;
}
public void ReleaseLock()
{
using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Suppress))
{
SqlCommand sqlCommand = new SqlCommand("sp_releaseapplock", _sqlConnection);
sqlCommand.CommandType = CommandType.StoredProcedure;
sqlCommand.Parameters.AddWithValue("Resource", _uniqueId);
sqlCommand.Parameters.AddWithValue("LockOwner", "Session");
sqlCommand.ExecuteNonQuery();
_isLockTaken = false;
transactionScope.Complete();
}
}
public void Dispose()
{
if (_isLockTaken)
{
ReleaseLock();
}
_sqlConnection.Close();
}
}
}
}

Thread abort leaves zombie transactions and broken SqlConnection

I feel like this behavior should not be happening. Here's the scenario:
1. Start a long-running sql transaction.
2. The thread that ran the sql command gets aborted (not by our code!)
3. When the thread returns to managed code, the SqlConnection's state is "Closed" - but the transaction is still open on the sql server.
4. The SqlConnection can be re-opened, and you can try to call rollback on the transaction, but it has no effect (not that I would expect this behavior; the point is there is no way to access the transaction on the db and roll it back).
The issue is simply that the transaction is not cleaned up properly when the thread aborts. This was a problem with .Net 1.1, 2.0 and 2.0 SP1. We are running .Net 3.5 SP1.
Here is a sample program that illustrates the issue.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data.SqlClient;
using System.Threading;
namespace ConsoleApplication1
{
class Run
{
static Thread transactionThread;
public class ConnectionHolder : IDisposable
{
public void Dispose()
{
}
public void executeLongTransaction()
{
Console.WriteLine("Starting a long running transaction.");
using (SqlConnection _con = new SqlConnection("Data Source=<YourServer>;Initial Catalog=<YourDB>;Integrated Security=True;Persist Security Info=False;Max Pool Size=200;MultipleActiveResultSets=True;Connect Timeout=30;Application Name=ConsoleApplication1.vshost"))
{
try
{
SqlTransaction trans = null;
trans = _con.BeginTransaction();
SqlCommand cmd = new SqlCommand("update <YourTable> set Name = 'XXX' where ID = #0; waitfor delay '00:00:05'", _con, trans);
cmd.Parameters.Add(new SqlParameter("0", 340));
cmd.ExecuteNonQuery();
cmd.Transaction.Commit();
Console.WriteLine("Finished the long running transaction.");
}
catch (ThreadAbortException tae)
{
Console.WriteLine("Thread - caught ThreadAbortException in executeLongTransaction - resetting.");
Console.WriteLine("Exception message: {0}", tae.Message);
}
}
}
}
static void killTransactionThread()
{
Thread.Sleep(2 * 1000);
// We're not doing this anywhere in our real code. This is for simulation
// purposes only!
transactionThread.Abort();
Console.WriteLine("Killing the transaction thread...");
}
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main(string[] args)
{
using (var connectionHolder = new ConnectionHolder())
{
transactionThread = new Thread(connectionHolder.executeLongTransaction);
transactionThread.Start();
new Thread(killTransactionThread).Start();
transactionThread.Join();
Console.WriteLine("The transaction thread has died. Please run 'select * from sysprocesses where open_tran > 0' now while this window remains open. \n\n");
Console.Read();
}
}
}
}
There is a Microsoft Hotfix targeted at .Net2.0 SP1 that was supposed to address this, but we obviously have newer DLL's (.Net 3.5 SP1) that don't match the version numbers listed in this hotfix.
Can anyone explain this behavior, and why the ThreadAbort is still not cleaning up the sql transaction properly? Does .Net 3.5 SP1 not include this hotfix, or is this behavior that is technically correct?
Since you're using SqlConnection with pooling, your code is never in control of closing the connections; the pool is. On the server side, a pending transaction will be rolled back when the connection is truly closed (socket closed), but with pooling the server side never sees a connection close. Without the connection being closed (either by a physical disconnect at the socket/pipe/LPC layer or by an sp_reset_connection call), the server cannot abort the pending transaction. So it really boils down to the fact that the connection does not get properly released/reset. I don't understand why you're trying to complicate the code with explicit thread-abort handling and an attempt to reopen a closed transaction (that will never work). You should simply wrap the SqlConnection in a using(...) block; the implied finally and connection Dispose will run even on a thread abort.
My recommendation would be to keep things simple: ditch the fancy thread-abort handling and replace it with a plain 'using' block (using(connection) { using(transaction) { code; commit(); } }), as sketched below.
Of course I assume you do not propagate the transaction context into a different scope in the server (you do not use sp_getbindtoken and friends, and you do not enroll in distributed transactions).
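For illustration, a minimal sketch of that shape using the question's update as a template; the table name, parameter name and connectionString are placeholders:
using (SqlConnection con = new SqlConnection(connectionString))
{
    con.Open();
    using (SqlTransaction trans = con.BeginTransaction())
    using (SqlCommand cmd = new SqlCommand(
        "update MyTable set Name = 'XXX' where ID = @id", con, trans))
    {
        cmd.Parameters.AddWithValue("@id", 340);
        cmd.ExecuteNonQuery();
        trans.Commit();
    }
    // If the thread is aborted mid-command, Dispose still runs on the
    // transaction and the connection, and the server rolls the work back.
}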
This little program shows that the Thread.Abort properly closes a connection and the transaction is rolled back:
using System;
using System.Data.SqlClient;
using testThreadAbort.Properties;
using System.Threading;
using System.Diagnostics;
namespace testThreadAbort
{
class Program
{
static AutoResetEvent evReady = new AutoResetEvent(false);
static long xactId = 0;
static void ThreadFunc()
{
using (SqlConnection conn = new SqlConnection(Settings.Default.conn))
{
conn.Open();
using (SqlTransaction trn = conn.BeginTransaction())
{
// Retrieve our XACTID
//
SqlCommand cmd = new SqlCommand("select transaction_id from sys.dm_tran_current_transaction", conn, trn);
xactId = (long) cmd.ExecuteScalar();
Console.Out.WriteLine("XactID: {0}", xactId);
cmd = new SqlCommand(#"
insert into test (a) values (1);
waitfor delay '00:01:00'", conn, trn);
// Signal readyness and wait...
//
evReady.Set();
cmd.ExecuteNonQuery();
trn.Commit();
}
}
}
static void Main(string[] args)
{
try
{
using (SqlConnection conn = new SqlConnection(Settings.Default.conn))
{
conn.Open();
SqlCommand cmd = new SqlCommand(#"
if object_id('test') is not null
begin
drop table test;
end
create table test (a int);", conn);
cmd.ExecuteNonQuery();
}
Thread thread = new Thread(new ThreadStart(ThreadFunc));
thread.Start();
evReady.WaitOne();
Thread.Sleep(TimeSpan.FromSeconds(5));
Console.Out.WriteLine("Aborting...");
thread.Abort();
thread.Join();
Console.Out.WriteLine("Aborted");
Debug.Assert(0 != xactId);
using (SqlConnection conn = new SqlConnection(Settings.Default.conn))
{
conn.Open();
// checked if xactId is still active
//
SqlCommand cmd = new SqlCommand("select count(*) from sys.dm_tran_active_transactions where transaction_id = #xactId", conn);
cmd.Parameters.AddWithValue("#xactId", xactId);
object count = cmd.ExecuteScalar();
Console.WriteLine("Active transactions with xactId {0}: {1}", xactId, count);
// Check count of rows in test (would block on row lock)
//
cmd = new SqlCommand("select count(*) from test", conn);
count = cmd.ExecuteScalar();
Console.WriteLine("Count of rows in text: {0}", count);
}
}
catch (Exception e)
{
Console.Error.Write(e);
}
}
}
}
This is a bug in Microsoft's MARS implementation. Disabling MARS in your connection string will make the problem go away.
If you require MARS, and are comfortable making your application dependent on another company's internal implementation, familiarize yourself with http://dotnet.sys-con.com/node/39040, break out .NET Reflector, and look at the connection and pool classes. You have to store a copy of the DbConnectionInternal property before the failure occurs. Later, use reflection to pass the reference to a deallocation method in the internal pooling class. This will stop your connection from lingering for 4:00 - 7:40 minutes.
There are surely other ways to force the connection out of the pool and to be disposed. Short of a hotfix from Microsoft, though, reflection seems to be necessary. The public methods in the ADO.NET API don't seem to help.

C# Console App, slow to exit

I have a very basic C# console app that connects to a db, executes a query, closes the connection, and exits out of the app.
The problem is, the app takes almost 3 seconds to exit.
I have displayed the time at each step to see why it is running slowly and it isn't during any of the processing, just when it exits out of the app.
Does anyone know how to speed this up?
Here is the output:
Opening Connection:94ms
26:OK
Closing Connection:356ms
Closed Connection:357ms
Exiting:358ms
[Delay of about 3 seconds before it exits]
And here is the code:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
namespace CheckSQL
{
class Program
{
static Stopwatch watch = new Stopwatch();
static void Main(string[] args)
{
if (args.Length == 0) return;
watch.Start();
string connstring = args[0];
string sqlquery = args[1];
ExecuteScalar(connstring, sqlquery);
watch.Stop();
Console.WriteLine(string.Format("Exiting:{0}ms", watch.ElapsedMilliseconds));
}
private static void ExecuteScalar(string connstring, string sqlquery)
{
SqlConnection sqlconn = new SqlConnection(connstring);
SqlCommand sqlcmd = new SqlCommand(sqlquery, sqlconn);
try
{
Console.WriteLine(string.Format("Opening Connection:{0}ms", watch.ElapsedMilliseconds));
sqlconn.Open();
Console.WriteLine(string.Format("{0}:OK", sqlcmd.ExecuteScalar()));
}
catch (Exception ex)
{
Console.WriteLine(string.Format("0:{0}", ex.Message));
}
finally
{
if (sqlconn.State == ConnectionState.Open)
{
Console.WriteLine(string.Format("Closing Connection:{0}ms", watch.ElapsedMilliseconds));
sqlconn.Close();
Console.WriteLine(string.Format("Closed Connection:{0}ms", watch.ElapsedMilliseconds));
}
}
}
}
}
I had a similar problem with a C# console app, and found that the issue had something to do with the cleanup of the connection pool when the app exits. With connections in the pool, I measured a 1.6 second delay in exiting (timed by an external script calling my EXE). Although I wasn't entirely happy with the solution, I found that issuing the following before exiting removed the delay:
System.Data.SqlClient.SqlConnection.ClearAllPools();
I would guess that using "Pooling=False" in your connection strings would also do the trick... but you would only do that if you didn't need the benefits of pooling.
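For context, this is roughly where the call would go in the question's Main (an adapted sketch; everything else is unchanged):
static void Main(string[] args)
{
    if (args.Length < 2) return; // guards both arguments
    watch.Start();
    ExecuteScalar(args[0], args[1]);

    // Release the pooled connection explicitly so process shutdown is not
    // held up by connection-pool cleanup.
    System.Data.SqlClient.SqlConnection.ClearAllPools();

    watch.Stop();
    Console.WriteLine(string.Format("Exiting:{0}ms", watch.ElapsedMilliseconds));
}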
Closing a connection (calling sqlconn.Close() ) only means returning it to the ConnectionPool.
So there still is some housekeeping to be done on exit.
3 seconds seems a bit long, but there are several components (CLR, Database) in play here.
I think it's impossible to do this in a fully "correct" way. How can you speed up a job that simply takes some time? The only real option would be to optimise the algorithm, but you can't do that here. As I understand it, you need to return control immediately after checking some database information. You could work around this with a two-process design: the first process starts the second, and the second checks the information in the database and sends the results to the first process, which communicates with the user. That way you always return control as soon as the results come back. The second process would still take some time to terminate, but that shouldn't worry you, because you would already have control back by then.
