My problem relates to an ASP.NET 4.0 web server blocking under increased load. By blocking
I mean the request is sent by the client but the response is returned only after ~45 seconds. This is reproducible in both the development and production environments. The 45 seconds seem to be constant, and I measured them both on the client and in the aspx page between the call to the Constructor() and void Render(HtmlTextWriter writer).
I use both several SqlDataSources and custom-made controls that issue a total of 6 SqlCommand.BeginExecuteReader(...) calls on one page. I can eliminate the problem if I deactivate the controls that use the BeginExecuteReader / EndExecuteReader pattern. So I assume that eventually one of the BeginExecute calls is blocked until a thread becomes available in the ThreadPool.
I print debug messages and noticed a pattern where a bunch of thread exit messages is always printed just before the blocked request returns:
The thread 'GetMolFileAsync' (0x1ba4) has exited with code 0 (0x0).
The thread 'GetMolFileAsync' (0x27d0) has exited with code 0 (0x0).
The thread '' (0x23c) has exited with code 0 (0x0).
The thread 'GetCompoundDepositionInfo' (0x1e88) has exited with code 0 (0x0).
The thread 'GetMolFileAsync' (0x2758) has exited with code 0 (0x0).
0x43 27/07/2012 15:09:42 45 ==> blocked thread took 45 seconds
0x5F 27/07/2012 15:10:27 0 ==> normal behaviour, processed in a few milliseconds
...
This is the method that starts a request to the database:
public static IAsyncResult GetCompoundDepositionInfoAsync(object sender, EventArgs e, AsyncCallback callback, object state)
{
    GetCompoundVersionInfoAsyncParameters parameters = (GetCompoundVersionInfoAsyncParameters)state;
    IAsyncResult res = null;
    parameters.cmd = new System.Data.SqlClient.SqlCommand("www.GetCompoundDepositionInfo", new System.Data.SqlClient.SqlConnection(parameters.connectionstring));
    parameters.cmd.CommandType = System.Data.CommandType.StoredProcedure;
    parameters.cmd.Parameters.AddWithValue("@CompoundID", parameters.CompoundID);
    try
    {
        parameters.cmd.Connection.Open();
        res = parameters.cmd.BeginExecuteReader(callback, parameters, System.Data.CommandBehavior.CloseConnection);
    }
    catch (Exception ex)
    {
        if (parameters.cmd.Connection.State == System.Data.ConnectionState.Open)
        {
            parameters.cmd.Connection.Close();
        }
        throw new Exception("Exception in calling GetCompoundDepositionInfoAsync()", ex);
    }
    return res;
}
This is the callback function:
public void GetCompoundDepositionInfoCallback(IAsyncResult result)
{
    gmdTools.GmdCompound.GetCompoundVersionInfoAsyncParameters param = (gmdTools.GmdCompound.GetCompoundVersionInfoAsyncParameters)result.AsyncState;
    System.Threading.Thread.CurrentThread.Name = "GetCompoundDepositionInfo";
    using (System.Data.SqlClient.SqlCommand command = param.cmd)
    using (System.Data.SqlClient.SqlDataReader reader = command.EndExecuteReader(result))
    {
        try
        {
            if (reader.Read())
            {
                lblDeposited.Text = string.Concat("at ", reader.GetDateTime(0).ToShortDateString(), " by ", reader.GetString(1));
            }
        }
        finally
        {
            if (reader != null)
            {
                reader.Close();
                command.Connection.Close();
            }
        }
    }
}
and this is the code to glue them together...
Page.RegisterAsyncTask(new PageAsyncTask(
    new BeginEventHandler(gmdTools.GmdCompound.GetCompoundLastChangeInfoAsync),
    new EndEventHandler(GetCompoundLastChangeInfoCallback),
    new EndEventHandler(GetCompoundInfoAsyncTimeout),
    new gmdTools.GmdCompound.GetCompoundVersionInfoAsyncParameters()
    {
        connectionstring = Properties.Settings.Default.GmdConnectionString,
        CompoundID = CompoundId,
    },
    true
));
As I have already spent hours looking at this code, I would appreciate any feedback.
UPDATE
These 45 seconds are caused by the default Page.AsyncTimeout and can be reduced to 10 seconds using the Async="true" AsyncTimeout="10" page directives. Although I improved the overall performance of the site considerably by adding appropriate indexes, very occasionally the client still has to wait this long before the server sends the response. In these cases no AsyncTimeout handler is called. I assume that the page registers all async operations but occasionally does not recognise that some of them completed successfully, and hence waits the AsyncTimeout seconds before rendering the page. Any comments on that?
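For reference, a minimal sketch of the code-behind equivalent of that directive (the 10-second value just mirrors the directive above; Page.AsyncTimeout defaults to 45 seconds, which matches the delay observed):

protected void Page_Load(object sender, EventArgs e)
{
    // Equivalent to <%@ Page Async="true" AsyncTimeout="10" ... %> in the .aspx markup:
    // timeout applied to all registered PageAsyncTasks on this page.
    Page.AsyncTimeout = TimeSpan.FromSeconds(10);
}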
It's probably the database. Given the number of rows the query returns, is it selecting more or less than one in a thousand? MS SQL will change the way it operates when the selectivity crosses 1 in 1000 rows returned. If you run the query under SQL Profiler, do you get a table scan? If you run the built-in stored procedure for determining missing indexes, does it return requests for indexes on these tables? Are your statistics up to date? One clue to this is that a restored backup runs the query quickly, because when you restore a backup the statistics are updated. Is there a clustered index on each of your tables?
This answer might also be relevant: Entity Framework MVC Slow Page Loads
Are you using the async=true property in the connection string? This is required for true asynchronous operations using the SqlClient.
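For example, a minimal sketch of such a connection string built with SqlConnectionStringBuilder (the server and database names are placeholders):

// Connection string with asynchronous processing enabled (required for BeginExecuteReader on .NET 4.0).
var builder = new System.Data.SqlClient.SqlConnectionStringBuilder
{
    DataSource = "myServer",          // placeholder
    InitialCatalog = "myDatabase",    // placeholder
    IntegratedSecurity = true,
    AsynchronousProcessing = true     // same as appending "Asynchronous Processing=true" (or "async=true")
};
string connectionString = builder.ConnectionString;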
If possible, can you try this on .NET 4.5 with the Task-based async feature, using code like below?
public async Task GetCompoundDepositionInfoAsync(CancellationToken cancellationToken)
{
    // "parameters" is the GetCompoundVersionInfoAsyncParameters object from the question
    using (var connection = new SqlConnection(parameters.connectionstring))
    using (var command = new SqlCommand("www.GetCompoundDepositionInfo", connection))
    {
        command.CommandType = System.Data.CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@CompoundID", parameters.CompoundID);
        await connection.OpenAsync(cancellationToken);
        using (var reader = await command.ExecuteReaderAsync(cancellationToken))
        {
            if (await reader.ReadAsync(cancellationToken))
            {
                lblDeposited.Text = string.Concat("at ", reader.GetDateTime(0).ToShortDateString(), " by ", reader.GetString(1));
            }
        }
    }
}
and in Page_Load():
RegisterAsyncTask(new PageAsyncTask(GetCompoundDepositionInfoAsync));
Related
I am faced with a peculiar async problem which I can reproduce easily but cannot understand.
My Current Setup
I have a WCF service which exposes two APIs: API1 and API2. Both service contracts are synchronous. API1 looks up a dictionary in memory, then uses Task.Factory.StartNew to create a task that fetches data from a SQL Server, compares it with the data from the dictionary and writes some logs. In case the SQL Server has connectivity issues, this retries SqlConnection.OpenAsync 3 more times. Note that the API call itself returns as soon as it has the data from the dictionary (it does not wait for the SQL operation to complete).
API2 is much simpler: it just calls a stored procedure on SQL Server, gets the data and returns.
The code to open connection is as follows:
public static int OpenSqlConn(SqlConnection connection)
{
    // synchronous wrapper that blocks on the async version
    return OpenSqlConnAsync(connection).Result;
}

public async static Task<int> OpenSqlConnAsync(SqlConnection connection)
{
    return await OpenConnAsync(connection);
}
private static async Task<int> OpenConnAsync(SqlConnection connection)
{
    int retryCounter = 0;
    TimeSpan? waitTime = null;
    DateTime startTime;
    while (true)
    {
        if (waitTime.HasValue)
        {
            await Task.Delay(waitTime.Value).ConfigureAwait(false);
        }
        try
        {
            startTime = DateTime.UtcNow;
            await connection.OpenAsync().ConfigureAwait(false);
            break;
        }
        catch (Exception e)
        {
            if (retryCounter >= 3)
            {
                SafeCloseConnection(connection);
                return retryCounter;
            }
            retryCounter++;
            waitTime = TimeSpan.FromSeconds(6);
        }
    }
    return retryCounter;
}
The API1 code looks like below:
public API1Response API1(API1Request request)
{
    // look up the in-memory dictionary for the request
    API1Response response = getDataFromDictionary(request);

    // create a fire-and-forget task to get some data from the DB
    Action action = () =>
    {
        GetDataFromDb(request);
    };
    Task.Factory.StartNew(action).ConfigureAwait(false);

    // this returns immediately even if the DB is not available and the task above is retrying
    return response;
}
public void GetDataFromDb(API1Request request)
{
    using (var connection = new SqlConnection(...))
    {
        OpenSqlConn(connection);    // hangs for a long time even if the DB is available
        ReadDataFromDb(connection);
    }
}
public API2Response API2(API2Request request)
{
    return GetDataFromDbForAPI2(request);
}

public API2Response GetDataFromDbForAPI2(API2Request request)
{
    using (var connection = new SqlConnection(...))
    {
        OpenSqlConn(connection);    // hangs for a long time even if the DB is available
        return ReadDataFromDb(connection);
    }
}
The Problem
The service runs into the following problem when the SQL Server is unavailable even for short periods of time, and some client makes just 100 calls to API1:
When my SQL Server has connectivity issues and I get around 100 calls to API1, even though API1 returns to the caller, it has created 100 tasks that will each try to open a connection to the bad DB. Each of those tasks hangs in a retry loop for some time (which is expected). In my experiments, I can simulate DB unavailability by using a bad connection string for API1.
Now let's say the DB is back up again and a call to API2 is made to the service. What I find is that when API2 call reaches the OpenAsync portion above, it hangs. Just hangs :(
Some observations
When I look at the Parallel Stacks window in Visual Studio, I find that there are 100 threads with the API1 stack, all blocked in the following call stack:
ManualResetEventSlim.Wait()
Task.SpinThenBlockingWait()
Task.InternalWait()
Task<TResult>.GetResultCore()
OpenConn()
There is 1 thread with the API2 stack, which is blocked in a similar call stack.
However, if I replace SqlConnection.OpenAsync with SqlConnection.Open(), the API2 call returns immediately.
Need Help
What I would like to understand is why API2, which should be able to open a DB connection (because the DB is available at that time), also hangs on OpenAsync. Is there an obvious synchronization issue that I am missing? When I change SqlConnection.OpenAsync() to SqlConnection.Open(), why does the behavior change?
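One way to look at the issue: the blocking wrapper OpenSqlConn parks one thread-pool thread per call via .Result while the awaited continuation also needs a pool thread. A minimal sketch of keeping the chain asynchronous end-to-end, reusing OpenSqlConnAsync from above (the GetDataFromDbAsync name and the _connectionString field are illustrative, not from the original code):

public async Task GetDataFromDbAsync(API1Request request)
{
    using (var connection = new SqlConnection(_connectionString)) // _connectionString is a placeholder
    {
        // awaiting instead of calling OpenSqlConn(connection) means no thread is blocked while waiting
        await OpenSqlConnAsync(connection).ConfigureAwait(false);
        ReadDataFromDb(connection);
    }
}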
I have an application leveraging Entity Framework 6. For queries that are relatively fast (e.g. taking less than a minute to execute) it is working fine.
But I have a stored procedure that queries a table which doesn't have appropriate indices, so the query has been clocked at anywhere between 55 and 63 seconds to execute. Obviously, indexing the table would bring that time down, but unfortunately I don't have the luxury of controlling the situation and have to deal with the hand I was dealt.
What I am seeing is that when EF6 is used to call the stored procedure, it continues through the code in less than 3 seconds total and returns a result of 0 records, when I know there are 6 records the stored procedure returns when executed directly against the database.
There are no errors whatsoever, so the code is executing fine.
As a test, I constructed some code using the SqlClient library and made the same call, and it returned 6 records. I also noted that, unlike the EF6 execution, it actually took a few more seconds, as if it were genuinely waiting to receive a response.
Setting the CommandTimeout on the context doesn't appear to make any difference either, and I suspect that is because it isn't timing out but rather not waiting for the result before it continues through the code.
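For reference, a minimal sketch of setting that timeout on an EF6 context (DBEntities comes from the code below; the 120-second value is only illustrative):

using (var ctx = new DBEntities(_defaultConnection))
{
    // EF6: timeout in seconds applied to commands generated by this context
    ctx.Database.CommandTimeout = 120;
    // ... query as in the sample below ...
}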
I don't recall seeing this behavior in prior versions, but then again maybe the time required to execute my prior queries was within the range EF expects?
Is there a way to set the actual time that EF will wait for a response before continuing through the code? Or is there a way that I can enforce an asynchronous operation, since it seems to be a synchronous task by default? Or is there a potential flaw in the code?
Sample of code exhibiting the (synchronous) execution: no errors, but no records returned:
public static List<Orphan> GetOrphanItems()
{
    try
    {
        using (var ctx = new DBEntities(_defaultConnection))
        {
            var orphanage = from orp in ctx.GetQueueOrphans(null)
                            select orp;
            var orphans = orphanage.Select(o => new Orphan
            {
                ServiceQueueId = o.ServiceQueueID,
                QueueStatus = o.QueueStatus,
                OrphanCode = o.OrphanCode,
                Status = o.Status,
                EmailAddress = o.EmailAddress,
                TemplateId = o.TemplateId
            }).ToList();
            return orphans;
        }
    }
    catch (Exception exc)
    {
        // Handle the error
        return new List<Orphan>(); // so that all code paths return a value
    }
}
Sample code using the SqlClient library (asynchronous); it takes slightly longer to execute but returns 6 records:
public static List<Orphan> GetOrphanItems()
{
    long ServiceQueueId = 0;
    bool QueueStatus;
    var OrphanCode = String.Empty;
    DateTime Status;
    var EmailAddress = String.Empty;
    int TemplateId = 0;
    var orphans = new List<Orphan>();
    SqlConnection conn = new SqlConnection(_defaultConnection);
    try
    {
        var cmdText = "EXEC dbo.GetQueueOrphans";
        SqlCommand cmd = new SqlCommand(cmdText, conn);
        conn.Open();
        SqlDataReader reader;
        reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            long.TryParse(reader["ServiceQueueId"].ToString(), out ServiceQueueId);
            bool.TryParse(reader["QueueStatus"].ToString(), out QueueStatus);
            OrphanCode = reader["OrphanCode"].ToString();
            DateTime.TryParse(reader["Status"].ToString(), out Status);
            EmailAddress = reader["EmailAddress"].ToString();
            int.TryParse(reader["TemplateId"].ToString(), out TemplateId);
            orphans.Add(new Orphan { ServiceQueueId = ServiceQueueId, QueueStatus = QueueStatus, OrphanCode = OrphanCode,
                EmailAddress = EmailAddress, TemplateId = TemplateId });
        }
        conn.Close();
    }
    catch (Exception exc)
    {
        // Handle the error
    }
    finally
    {
        conn.Close();
    }
    return orphans;
}
Check the return type of the executing method.
private async void MyMethod()
{
    db.executeProcedureAsync(); // not awaited
}
Forgetting to await a task in an async void method can cause the described behavior without any IntelliSense warning.
Fix:
private async Task MyMethod()
{
    await db.executeProcedureAsync();
}
Or just use db.executeProcedureAsync().Wait() if you want to run it synchronously.
I am new to C# programming.
I am trying to get the number of updates for a list of servers using a BackgroundWorker. The result for every server is shown in a ListView in the ReportProgress handler.
I am able to get correct results using a foreach loop, but when I try to get the same results using Parallel.ForEach, all the columns and rows of the ListView are mixed up.
for example:
output of foreach loop:
Server Name    Status                    Updates Available
server1        Login to server failed!   0
server2        Updates are available     3
server3        Updates are available     3
server4        Up to Date                0
and so on..
output of parallel foreach:
server1        Updates are available     1
server1        Login to server failed!   1
server2        Login to server failed!   0
server3        Login to server failed!   0
server4        Login to server failed!   0
server4        Updates are available     3
and so on..
I have tried locking parts of the code and have also tried using a ConcurrentBag, but I was not able to resolve the issue. Below is the Parallel.ForEach code. Am I doing something wrong? Any suggestions would be of great help.
Parallel.ForEach(namelist, /*new ParallelOptions { MaxDegreeOfParallelism = 4 }, */ line =>
//foreach (string line in namelist)
{
    if (worker.CancellationPending)
    {
        e.Cancel = true;
        worker.ReportProgress(SysCount, obj);
    }
    else
    {
        this.SystemName = line; //file.ReadLine();
        Status.sVariables result = new Status.sVariables();
        result = OneSystem(this.SystemName);
        switch (result.BGWResult)
        {
            case -1:
                this.StatusString = "Login to server failed!";
                break;
            //other statuses are assigned here;
        }
        SysCount++;
        bag.Add(this);
    }
    Status returnobj;
    bag.TryTake(out returnobj);
    worker.ReportProgress(SysCount, returnobj);
    Thread.Sleep(200);
});
ReportProgress Method:
private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    if (!backgroundWorker1.CancellationPending)
    {
        Status result = (Status)e.UserState;
        Complete_label.Visible = true;
        if (listView1.InvokeRequired)
            listView1.Invoke(new MethodInvoker(delegate
            {
                listView1.Items.Add("");
                listView1.Items[result.SysCount - 1].SubItems.Add(result.SystemName);
                listView1.Items[result.SysCount - 1].SubItems.Add(result.StatusString);
                listView1.Items[result.SysCount - 1].SubItems.Add(result.AvailableUpdatesCount.ToString());
            }));
        else
        {
            try
            {
                listView1.Items.Add("");
                listView1.Items[result.SysCount - 1].SubItems.Add(result.SystemName);
                listView1.Items[result.SysCount - 1].SubItems.Add(result.StatusString);
                listView1.Items[result.SysCount - 1].SubItems.Add(result.AvailableUpdatesCount.ToString());
            }
            catch (Exception ex)
            { }
            //other stuff
        }
    }
}
The real problem is that the ListView updating code uses the wrong index to update items. It assumes the Status.SysCount property contains the correct index. This may be true if execution happens in sequence, but fails if execution runs in parallel - different threads can finish at different speeds and report progress out-of-order.
The actual problem can be fixed simply by using the ListViewItem object returned by ListViewItemCollection.Add
private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    if (!backgroundWorker1.CancellationPending)
    {
        Status result = (Status)e.UserState;
        Complete_label.Visible = true;
        var newItem = listView1.Items.Add("");
        newItem.SubItems.Add(result.SystemName);
        newItem.SubItems.Add(result.StatusString);
        newItem.SubItems.Add(result.AvailableUpdatesCount.ToString());
        //other stuff
    }
}
The code has more serious problems though: the Status class tries to process data in parallel while storing the results in its own properties, then sends itself for reporting. Obviously, the data that gets displayed will keep changing as other threads overwrite it.
A better option is either to create a new Status instance inside the loop or, better yet, to create a class used only for reporting:
class StatusProgress
{
    public string SystemName { get; set; }
    public string StatusString { get; set; }
    public int AvailableUpdatesCount { get; set; }
}

....

int sysCount = 0;
Parallel.ForEach(namelist, line =>
{
    var progress = new StatusProgress();
    progress.SystemName = line; //file.ReadLine();
    Status.sVariables result = new Status.sVariables();
    result = OneSystem(line);
    switch (result.BGWResult)
    {
        case -1:
            progress.StatusString = "Login to server failed!";
            break;
        //other statuses are assigned here;
    }
    var count = Interlocked.Increment(ref sysCount);
    worker.ReportProgress(count, progress);
});
Notice that instead of SysCount++ I use Interlocked.Increment to increment the value atomically and get a copy of the incremented value. If I didn't do that, multiple threads could modify SysCount before I had a chance to report progress.
The progress reporting code would change to use StatusProgress:
StatusProgress result = (StatusProgress)e.UserState;
Finally, the BackgroundWorker is obsolete, as the Task Parallel Library offers everything the BGW did and more, in a far more lightweight manner. For example, you can cancel the parallel loop by using a CancellationToken and report progress in a type-safe manner using the Progress<T> class.
Most asynchronous methods in .NET recognize CancellationToken and IProgress<T>, which means you can report progress and cancel asynchronous tasks easily, as shown here.
The code could be rewritten like this:
On a UI form:
private void ReportServerProgress(StatusProgress result)
{
    Complete_label.Visible = true;
    var newItem = listView1.Items.Add("");
    newItem.SubItems.Add(result.SystemName);
    newItem.SubItems.Add(result.StatusString);
    newItem.SubItems.Add(result.AvailableUpdatesCount.ToString());
    //other stuff
}

CancellationTokenSource _cts;
Progress<StatusProgress> _progress;

public void StartProcessing()
{
    _cts = new CancellationTokenSource();
    _progress = new Progress<StatusProgress>(progress => ReportServerProgress(progress));
    StartProcessing(/*input*/, _cts.Token, _progress);
}

public void CancelLoop()
{
    if (_cts != null)
        _cts.Cancel();
}
The processing code can be on the same form or in any other class. In fact, it's better to separate the UI from the processing code, especially when you have non-trivial processing, e.g. calling each server to determine its status.
public void StartProcessing(/*input parameters*/,
                            CancellationToken token,
                            IProgress<StatusProgress> progress)
{
    .....
    var po = new ParallelOptions();
    po.CancellationToken = token;
    Parallel.ForEach(namelist, po, line =>
    {
        var status = new StatusProgress();
        status.SystemName = line; //file.ReadLine();
        Status.sVariables result = new Status.sVariables();
        result = OneSystem(line);
        switch (result.BGWResult)
        {
            case -1:
                status.StatusString = "Login to server failed!";
                break;
            //other statuses are assigned here;
        }
        progress.Report(status);
    });
}
Many asynchronous .NET methods accept a cancellation token, so you can pass it, e.g., to a web service call and ensure that both the loop and any outstanding long calls are cancelled.
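For instance, a minimal sketch of passing the same token into an asynchronous web-service call made for each server (the method name and URL are placeholders, not from the original code):

// Illustrative only: the token that cancels the parallel loop also cancels any in-flight request.
async Task<string> QueryServerStatusAsync(string serverName, CancellationToken token)
{
    using (var client = new System.Net.Http.HttpClient())
    {
        var response = await client.GetAsync("http://" + serverName + "/updates/status", token); // placeholder URL
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}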
Your results are all mixed up because you are using a parallel operation to write to shared state, e.g. SystemName and StatusString, so the contents of those shared variables end up mixed up when you try to read and print their values.
You could introduce a lock, but this would completely defeat the point of the Parallel.ForEach. So either abandon the use of Parallel.ForEach (which seems to serve no useful purpose in this instance) or gather the data per worker and ensure it is sent to the reporter in a thread-safe fashion.
To further explain, let's examine the code:
this.SystemName = line; // <- the worker has now written to this, which is global to all workers
...
result = OneSystem(this.SystemName); // <- another worker may have overwritten SystemName at this point
...
this.StatusString = "Login to server failed!"; // <- again writing to shared variable
...
bag.Add(this); // <- now trying to "thread protect" already corrupted data
So if you must run the loop in parallel, each worker must update only its own isolated data then push that off to the GUI marshalling report method.
Hi all, I just have a quick question for you. For whatever reason, a piece of code periodically does not return, and I am not 100% sure why yet. To combat this for now, I want to know: using the Close() method below, is there a way to put a timeout on it, so that if it does not finish within a minute or so, it just moves on?
Any advice would be appreciated. Thank you.
If it makes any difference, the original author noted that he believed it hangs on the Close() and commented "Maybe too fast?" (the connection is an OleDb connection to Netezza, and the whole application is heavily multi-threaded).
Anyway, for now, I just want the application to at least finish instead of hanging in that exception catch.
Below is the Close() which I believe is not returning:
catch (Exception)
{
    Close(); //-- if we have an error, close everything down and then return the error
    throw;
}
public void Close()
{
    if (null != Command)
    {
        Command.Cancel();
        Command.Dispose();
        Command = null;
    }
    if (null != Connection)
    {
        if (Connection.State != System.Data.ConnectionState.Closed)
            Connection.Close();
        Connection.Dispose();
        Connection = null;
    }
}
Rather than a timeout on a method, do you really mean a timeout on a command?
Based on that Close(), you are sharing Command and Connection.
That is not a good design for a heavily multi-threaded application.
It is not a good design even for a lightly multi-threaded application.
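A rough sketch of what per-operation state looks like instead, assuming an OleDb connection as in the question (the connection string and command text are placeholders):

// using System.Data.OleDb;
string connectionString = ""; // placeholder
using (var connection = new OleDbConnection(connectionString))
using (var command = connection.CreateCommand())
{
    command.CommandText = "SELECT 1"; // placeholder
    connection.Open();
    command.ExecuteNonQuery();
} // Dispose closes the connection even if an exception is thrown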
As for the timeout itself: DbCommand has a CommandTimeout property, and a using statement will perform cleanup (including closing the connection).
string connectionString = "";
// Wait for a 5 second delay in the command
string queryString = "waitfor delay '00:00:05'";
using (OleDbConnection connection = new OleDbConnection(connectionString))
{
    connection.Open();
    OleDbCommand command = connection.CreateCommand();
    command.CommandText = queryString;
    // Setting command timeout to 1 second
    command.CommandTimeout = 1;
    try
    {
        command.ExecuteNonQuery();
    }
    catch (DbException e)
    {
        Console.WriteLine("Got expected DbException due to command timeout");
        Console.WriteLine(e);
    }
}
Assuming you're using .NET 4.0 or above, you can use the TPL to do this via the System.Threading.Tasks.Task class. You create a Task that runs the method asynchronously, then Wait on that task for your timeout duration; if the timeout expires, let the main thread continue.
Task timeoutTask = new Task(Close); // create a Task around the Close method.
timeoutTask.Start(); // run asynchronously.
bool completedSuccessfully = timeoutTask.Wait(TimeSpan.FromMinutes(1));
if (completedSuccessfully)
{
// Yay!
}
else
{
logger.Write("Close command did not return in time. Continuing");
}
In this example, the Close method will keep on running in the background, but your main thread can continue.
I have a very basic C# console app that connects to a db, executes a query, closes the connection, and exits out of the app.
The problem is, the app takes almost 3 seconds to exit.
I have displayed the elapsed time at each step to see where it is running slowly, and it isn't during any of the processing, just when it exits the app.
Does anyone know how to speed this up?
Here is the output:
Opening Connection:94ms
26:OK
Closing Connection:356ms
Closed Connection:357ms
Exiting:358ms
[Delay of about 3 seconds before it exits]
And here is the code:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

namespace CheckSQL
{
    class Program
    {
        static Stopwatch watch = new Stopwatch();

        static void Main(string[] args)
        {
            if (args.Length < 2) return;

            watch.Start();

            string connstring = args[0];
            string sqlquery = args[1];
            ExecuteScalar(connstring, sqlquery);

            watch.Stop();
            Console.WriteLine(string.Format("Exiting:{0}ms", watch.ElapsedMilliseconds));
        }

        private static void ExecuteScalar(string connstring, string sqlquery)
        {
            SqlConnection sqlconn = new SqlConnection(connstring);
            SqlCommand sqlcmd = new SqlCommand(sqlquery, sqlconn);
            try
            {
                Console.WriteLine(string.Format("Opening Connection:{0}ms", watch.ElapsedMilliseconds));
                sqlconn.Open();
                Console.WriteLine(string.Format("{0}:OK", sqlcmd.ExecuteScalar()));
            }
            catch (Exception ex)
            {
                Console.WriteLine(string.Format("0:{0}", ex.Message));
            }
            finally
            {
                if (sqlconn.State == ConnectionState.Open)
                {
                    Console.WriteLine(string.Format("Closing Connection:{0}ms", watch.ElapsedMilliseconds));
                    sqlconn.Close();
                    Console.WriteLine(string.Format("Closed Connection:{0}ms", watch.ElapsedMilliseconds));
                }
            }
        }
    }
}
I had a similar problem with a C# console app and found that the issue had something to do with the cleanup of the connection pool when the app exits. With connections in the pool, I measured a 1.6 second delay in exiting (timed by an external script calling my EXE). Although I wasn't entirely happy with the solution, I found that issuing the following before exiting removed the delay:
System.Data.SqlClient.SqlConnection.ClearAllPools();
I would guess that using "Pooling=False" in your connection strings would also do the trick, but you would only do that if you didn't need the benefits of pooling.
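A minimal sketch of that connection string option (the server and database names are placeholders):

// Pooling disabled: every Close() really closes the physical connection,
// so there is no pool left to clean up at process exit (at the cost of slower opens).
string connstring = "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True;Pooling=False";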
Closing a connection (calling sqlconn.Close()) only means returning it to the connection pool, so there is still some housekeeping to be done on exit.
3 seconds seems a bit long, but there are several components (CLR, database) in play here.
I think it's impossible to do this in a correct way: how can you speed up a job that simply takes some time? The only real option would be to optimise the algorithm, but you can't do that here. As I understand it, you need to return control immediately after checking some database information. You can work around this by creating a two-process system: the first process starts the second, and the second checks the information in the database and sends the results to the first process, which communicates with the user. That way you always return control as soon as the results arrive. The second process will take some time to terminate, but that should not worry you, because you already have control by that moment.
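A rough sketch of that two-process idea, assuming the existing CheckSQL.exe is reused as the worker process and its console output is read by the launcher (the executable name, arguments and the ":OK" line format are taken from the question; everything else is illustrative):

using System;
using System.Diagnostics;

class Launcher
{
    static void Main(string[] args)
    {
        // Start the worker process (the console app above) and capture its output.
        var psi = new ProcessStartInfo("CheckSQL.exe", "\"<connection string>\" \"<query>\"")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var worker = Process.Start(psi))
        {
            // Read output line by line as it arrives; we can hand control back to the user
            // as soon as the result line is seen, without waiting for the worker's slow exit.
            string line;
            while ((line = worker.StandardOutput.ReadLine()) != null)
            {
                Console.WriteLine(line);
                if (line.EndsWith(":OK")) break; // query result received (format from the output above)
            }
        }
    }
}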