How to find what is using the connections in my connection pool - c#

I have a problem in some code that I have written using .NET.
Somewhere I have some dodgy database code, which means that after some time I get the following error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I know it's because somewhere I haven't disposed of one of my data readers or something similar, which means the connection is still open and isn't being returned to the pool. I'm having a bit of trouble finding where this is happening in my code, though.
So my question:
Is there any way to query the connection pool to find out what its in-use connections are doing? I'm just looking for a way to find out what query is being run, to help me track down the offending piece of code.
For what it's worth, I don't have permission to run an activity monitor against the database in question to find out that way.

Is there any way to query the connection pool to find out what its in-use connections are doing?
No, not really. The connection pool is something that your application maintains (internally it is a List<DbConnectionInternal>). If you really wanted to, you could get to the connections in the pool via reflection or, if you are debugging, via a locals or watch window, but you can't see from there what's happening on a given connection, or which object should have called Connection.Close (or Dispose). So that doesn't help.
If you're lucky, you can execute sp_who or sp_who2 at the moment you get the timeout, when you've run out of pooled connections, but it's highly likely that most of the results are going to look like this:
SPID Status   Login Hostname  BlkBy DBName Command          ....
---- -------- ----- --------- ----- ------ ---------------- ....
79   sleeping uName WebServer .     YourDb AWAITING COMMAND .....
80   sleeping uName WebServer .     YourDb AWAITING COMMAND .....
81   sleeping uName WebServer .     YourDb AWAITING COMMAND .....
82   sleeping uName WebServer .     YourDb AWAITING COMMAND .....
Which means that, yes indeed, your application has opened a lot of connections, hasn't closed them, and isn't even doing anything with them.
The best way to combat this is to profile your application using the ADO.NET performance counters, keep a close eye on NumberOfReclaimedConnections, and do a thorough code review.
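A minimal sketch of reading those counters from inside the application, assuming the standard ".NET Data Provider for SqlServer" counter category is available on the machine (instance names vary per process, so they are enumerated here rather than hard-coded):
using System;
using System.Diagnostics;

static void DumpPoolCounters()
{
    // Sketch only: dump the pooling-related ADO.NET counters for every instance
    // (each instance corresponds to a process/AppDomain that uses SqlClient).
    var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");
    string[] counterNames =
    {
        "NumberOfActiveConnectionPools",
        "NumberOfPooledConnections",
        "NumberOfReclaimedConnections"   // connections reclaimed by the GC, i.e. never disposed by your code
    };
    foreach (string instance in category.GetInstanceNames())
    {
        foreach (string counterName in counterNames)
        {
            using (var counter = new PerformanceCounter(category.CategoryName, counterName, instance, readOnly: true))
            {
                Console.WriteLine("{0} [{1}] = {2}", counterName, instance, counter.NextValue());
            }
        }
    }
}
A steadily climbing NumberOfReclaimedConnections is the classic sign of connections that were dropped without Dispose being called.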
If you're really desperate, you can clear the pool when you encounter this problem:
using (SqlConnection cnn = new SqlConnection("YourCnnString"))
{
    try
    {
        cnn.Open();
    }
    catch (InvalidOperationException)
    {
        // The pool is exhausted: clear it and retry the open once.
        SqlConnection.ClearPool(cnn);
        cnn.Open();
    }
}
I do, however, caution you against this: it has the potential to choke your database server, because it allows your application to open as many connections as the server will allow before it simply runs out of resources.

Have you tried loading up SSMS and running sp_who2 on the database in question?
USE [SomeDatabase]
EXEC sp_who2
That should show you what's happening at a moment in time.

Related

C# SqlConnections using up entire connection pool

I've written a service that at times has to poll a database very frequently. Usually I'd create a new SqlConnection with the SqlDataAdapter and fire away like this:
var table = new DataTable();
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var adapter = new SqlDataAdapter(selectStatement, connection))
    {
        adapter.Fill(table);
    }
}
However in a heavy load situation (which occurs maybe once a week), the service might actually use up the entire connection pool and the service records the following exception.
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Multiple threads in the service have to access the SQL Server for various queries, and I'd like as many of them to run in parallel as possible (which obviously works too well sometimes).
I thought about several possible solutions:
I thought about increasing the connection pool size, however that might just delay the problem (a sketch of what that looks like follows this list).
Then I thought about using a single connection for the service and keeping it open for as long as the service runs. That might be a simple option, but it keeps the connection open even when there is no work to do, and I would have to handle connection resets by the server, etc., whose effects I don't know.
Lastly I thought about implementing my own kind of pool that manages the number of concurrent connections and keeps the threads on hold until there is a free slot.
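For reference, a hedged sketch of the first option: Max Pool Size and Connect Timeout are standard SqlClient connection-string keywords (the values below are only illustrative), and raising them only moves the ceiling rather than removing the contention.
using System.Data.SqlClient;

// Illustrative only: raise the pool ceiling (default 100) and the time a caller
// waits for a free pooled connection before the timeout exception is thrown.
var builder = new SqlConnectionStringBuilder(connectionString)
{
    MaxPoolSize = 200,
    ConnectTimeout = 30   // seconds
};
// ...then use builder.ConnectionString wherever connectionString was used before.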
What would be the recommended procedure or is there a best practice way of handling this?
Well, the ideal solution (fixing the issue on the SQL Server side) wasn't available, so I ended up checking the number of concurrent connections in the job queuing system.
The service will now not create another thread for document generation unless it can guarantee that the connection pool is actually available. The bad bottleneck on the SQL server is still in place, however the service now no longer generates exceptions.
The downside, of course, is that the queue gets longer while some blocking query is executing on the SQL Server, which might delay document generation by a minute or two. So it isn't an ideal solution, but it is a workable one, since the delay isn't critical: the documents aren't needed immediately but are stored for archival purposes.
The better solution would have been to fix it on the SQL Server side.
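A minimal sketch of that kind of throttling, assuming a SemaphoreSlim sized a little below the pool limit; all names here are illustrative, not the actual service code.
using System.Data;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

static class ThrottledDb
{
    // No more than MaxConcurrentQueries callers can hold a connection at once,
    // so the pool (default max 100) can never be exhausted; extra callers queue up.
    private const int MaxConcurrentQueries = 20;
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(MaxConcurrentQueries);

    public static async Task<DataTable> QueryAsync(string connectionString, string selectStatement)
    {
        await Gate.WaitAsync();   // wait for a free slot instead of failing with a pool timeout
        try
        {
            var table = new DataTable();
            using (var connection = new SqlConnection(connectionString))
            {
                await connection.OpenAsync();
                using (var adapter = new SqlDataAdapter(selectStatement, connection))
                {
                    adapter.Fill(table);
                }
            }
            return table;
        }
        finally
        {
            Gate.Release();       // free the slot for the next waiting caller
        }
    }
}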

Connection Pooling clear time... I don't really know what to call it :)

I have a client application that connects to the database via a using statement.
using (SqlConnection sqlConn = new SqlConnection(ConnectionString))
{
    sqlConn.Open();
    // SQL command code
}
I know this will ensure that sqlConn.Close() and sqlConn.Dispose() are called. However, for each client, after running this code I still see some SPIDs on SQL Server in sleeping mode, such as:
60 0 sleeping sa AVENGER 0 Xmark AWAITING COMMAND 0
61 0 sleeping sa AVENGER 0 Xmark AWAITING COMMAND 0
62 0 sleeping sa AVENGER 0 Xmark AWAITING COMMAND 0
I know this is because I use connection pooling; that is why they are sleeping, ready to be reused for further commands. I see these processes get flushed out over time (after 10 minutes or so) if I don't do anything with them.
My question is: is there a setting, either in C# or in SQL Server 2008, that will reduce this time to, say, 2 minutes?
The problem I face is that my max pool size is reached quickly if many clients connect to the database within a short period of time. I realize I can fix it by increasing the pool size, but I think that is like swapping a smaller bowl for a bigger one to catch water from a leaking roof.
I searched MSDN and came across ClearPool(), which would let me explicitly remove the SPIDs from the pool, but I think that defeats the purpose of connection pooling and isn't really clean.
Any help is greatly appreciated.
As you know, the underlying connection may not actually be closed when you call sqlConn.Close(); the pool keeps it and will reuse it for any other client connecting with the same connection string.
Ideally, instead of looking to decrease this time, you should use a data access layer class that acts as a helper for creating and managing connections.
Using such an approach avoids pooling issues, and I have seen it used in good application architectures.
You should modify your code to this approach instead.
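A minimal sketch of what such a helper might look like (the class and method names are hypothetical); the idea is that every caller borrows a connection for as short a time as possible and it is always disposed, so it goes straight back to the pool:
using System;
using System.Data.SqlClient;

public static class Db
{
    private static readonly string ConnectionString = "YourCnnString";   // placeholder

    // Centralises connection creation so no caller can forget to close/dispose a connection.
    public static T Execute<T>(Func<SqlConnection, T> work)
    {
        using (var sqlConn = new SqlConnection(ConnectionString))
        {
            sqlConn.Open();
            return work(sqlConn);   // the connection returns to the pool as soon as this scope ends
        }
    }
}
Callers pass in a delegate that does its work against the open connection; the using block closes the connection for them when the delegate returns.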
You are looking for the 'Idle Time-out' setting. It is configured for the application pool in IIS. Refer to this link

SQL Exception: "Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it

When opening a connection to SQL Server 2005 from our web app, we occasionally see this error:
"Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it.
We use MARS and connection pooling.
The exception originates from the following piece of code:
protected SqlConnection Open()
{
    SqlConnection connection = new SqlConnection();
    connection.ConnectionString = m_ConnectionString;
    if (connection != null)
    {
        try
        {
            connection.Open();
            if (m_ExecuteAsUserName != null)
            {
                string sql = string.Format("EXECUTE AS LOGIN = {0};", m_ExecuteAsUserName);
                ExecuteCommand(connection, sql);
            }
        }
        catch (Exception exception)
        {
            connection.Close();
            connection = null;
        }
    }
    return connection;
}
I found an MS Connect article which suggests that the error is caused when a previous command has not yet terminated before the EXECUTE AS LOGIN command is sent. Yet how can this be if the connection has only just been opened?
Could this be something to do with connection pooling interacting strangely with MARS?
UPDATE: For the short term, we have implemented a workaround: we clear out the connection pool whenever this happens, to get rid of the bad connection, as it otherwise keeps getting handed back to various users. (This now happens 5-10 times a day with only a small number of simultaneous users, so it is fairly annoying.) But if anyone has any further ideas, we are still looking for a real solution...
I would say it's MARS rather than pooling.
From "Using Multiple Active Result Sets (MARS)"
Applications can have multiple default result sets open and can interleave reading from them.
Applications can execute other statements (for example, INSERT, UPDATE, DELETE, and stored procedure calls) while default result sets are open.
Connection pooling in its basic form means the connection open/close overhead is minimised, but any connection (until MARS) has only one thing going on at any one time. Pooling has been around for some time and just works out of the box.
MARS (I've not used it, BTW) introduces overlapping "stuff" going on on a single connection. So MARS, rather than connection pooling, is probably the bigger culprit of the two.
From "Extending Database Impersonation by Using EXECUTE AS"
When impersonating a principal by using the EXECUTE AS LOGIN statement, or within a server-scoped module by using the EXECUTE AS clause, the scope of the impersonation is server-wide.
This may explain why MARS is causing it: the same principal in two sessions, both running EXECUTE AS.
There may be something in that article of use, or try this:
IF ORIGINAL_LOGIN() = SUSER_SNAME() EXECUTE AS LOGIN = {0};
On reflection, and after reading for this answer, I'm not convinced that trying to change the execution context for each session (MARS) on one connection is a good idea...
Don't blame connection pooling - MARS is quite notorious for wreaking havoc. It's not entirely its fault, but it's kind of half and half. The key thing to remember is that MARS is designed for, and only works with, "normal" DB use (meaning regular CRUD stuff, no admin batches). Any command that has a wide effect on the DB engine can trip up MARS, even if it's just one connection and single-threaded (like running a setup batch to create tables, or a nested transaction).
Having said that, one can easily just blame MARS, but it works perfectly fine for normal CRUD scenarios, which are like 99% of use (and things with low efficiency like ORMs and LINQ depend on it for life). Meaning it's important for people to learn that if they want to hack SQL through a connection, they can't use MARS. For example, I had setup code that created a whole DB from scratch, because it's very convenient for deployment, but it was sharing a connection string with the web service it was deploying - oops :-) Took me a few days of digging to learn my lesson. So I just maintained the separation of concerns (which is always good) and the problems went away.
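A hedged illustration of that separation of concerns: give setup/admin batches their own connection string with MARS switched off (MultipleActiveResultSets is a standard SqlClient keyword; the split shown here is just an example, not the original deployment code).
using System.Data.SqlClient;

// Normal CRUD / ORM traffic keeps MARS on; admin and deployment batches get their own
// connection string with MARS off. Because the strings differ, they also get separate
// connection pools, so the two kinds of work never share a pooled MARS connection.
var appBuilder = new SqlConnectionStringBuilder(baseConnectionString)
{
    MultipleActiveResultSets = true
};
var adminBuilder = new SqlConnectionStringBuilder(baseConnectionString)
{
    MultipleActiveResultSets = false
};

using (var adminConnection = new SqlConnection(adminBuilder.ConnectionString))
{
    adminConnection.Open();
    // ... run setup/DDL batches here
}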
Have you tried using REVERT at the end of your SQL statement?
http://msdn.microsoft.com/en-us/library/ms178632.aspx
I always do this to just make sure the current context is back to normal.
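For example, a hedged sketch of running the impersonated work and the REVERT in the same batch, so the context is always restored before the connection goes back to the pool (the connection string, login and query are placeholders):
using System.Data.SqlClient;

string sql =
    "EXECUTE AS LOGIN = 'SomeLogin'; " +   // placeholder login
    "SELECT SUSER_SNAME(); " +             // the impersonated work goes here
    "REVERT;";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(sql, connection))
{
    connection.Open();
    // Returns 'SomeLogin' while impersonating; REVERT then restores the original context.
    object impersonatedLogin = command.ExecuteScalar();
}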

ASP.NET SqlConnection Timeout issue

I have run into a frustrating issue which I originally thought was a connection leak, but that does not seem to be the case. The scenario is this: the data access for this application uses the Enterprise Library (v4) from Microsoft. All data access calls are wrapped in using statements such as
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    // ...more code
}
Now, the index page of this application makes 8 calls to the database to load everything, and I can bring the application to its knees by refreshing the index about 15 times. It seems that when the database reaches 113 connections is when I receive this error. Here is what makes this weird:
I have run similar code with the entlib on high traffic sites and have NEVER had this problem ever.
If I kill all the connections to the database and get the production application back up and running, then every time I refresh the application I can run this SQL:
SELECT DB_NAME(dbid) as 'Database Name',
COUNT(dbid) as 'Total Connections'
FROM sys.sysprocesses WITH (nolock)
WHERE dbid > 0
GROUP BY dbid
I can see the number of connections actively increasing with each page refresh. Running the same code on my local box with the same connection string does not cause this problem. Further, if the production website is down, I can fire up the site via Visual Studio and run it fine; the only difference between the two is that the production site has Windows authentication turned on and my local copy doesn't. Turning Windows authentication off seems to have no effect on the server.
I have absolutely no clue what is causing this or why the connections are not being disposed of in SQL Server. The EntLib objects do not expose .Close() methods for anything, so I can't explicitly close the object.
Any thoughts?
Thanks!
Edit
Wow I just noticed that I never actually posted the error message. Oy. The actual connection error is: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Check that the stored procedure you are executing is not running into a row or table lock. Also, if possible, try deploying to another server and check whether the application crawls again.
Also try increasing the maximum allowed connections for your SQL Server.
I think the "Timeout Expired" error is a general issue and may have several causes. Increasing the timeout can solve some of them, but not all.
You may also refer to the following links to troubleshoot and fix the error
http://techielion.blogspot.com/2007/01/error-timeout-expired-timeout-period.html
Could it be a configuration issue on the server?
How do you make a connection to the database on the production server?
That might be an area worth looking into.
While I don't know the answer, I can suggest that for some reason connections are not being closed by your application when run in production. (Stating the obvious.)
You might want to examine your network configuration between the web server and the SQL Server. High-latency networks can cause connections not to be closed in time.
Also it might help looking at the performance counters listed in the end of the following msdn article:
http://msdn.microsoft.com/en-us/library/8xx3tyca%28VS.71%29.aspx
Finally, if nothing else helps, I'd get a debugger and the Enterprise Library source code onto production and debug your code inside the Enterprise Library to find out why connections are not being closed.
Silly question: are you properly closing your DataReader? If not, this could be the problem, and the difference in behaviour between dev and prod can be caused by different garbage collection patterns.
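If it's unclear where the reader should be closed, the pattern below is the safe one: put both the reader and the connection in using blocks (or pass CommandBehavior.CloseConnection to ExecuteReader), so the connection is returned to the pool even when an exception is thrown mid-read. The query and column here are placeholders:
using System;
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT Id FROM SomeTable", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetInt32(0));
        }
    }   // the reader is closed here even if Read() throws
}       // the connection is closed/returned to the pool here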
I would disable connection pooling and see what happens (heh). Just add ";Pooling=false" to your connection string.
Or, perhaps you could add something like the following 'cleanup' code to your page (which closes any connection left open when the page unloads) - right in the 'using' clause:
System.Web.UI.Page page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null) {
    page.Unload += (EventHandler)delegate(object s, EventArgs e) {
        try {
            dbCommand.Connection.Close();
        } catch (Exception) {
        } finally {
            result = null;
        }
    };
}
Also, make sure you've enabled the 'shared memory' protocol if your SQL Server and IIS are on the same machine (a real performance booster)!

What are some good ways to debug timeouts? (C#)

I'm building a site that runs fine for a few hours, but then *.asmx and *.ashx calls start timing out.
The exception is: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
I'm using SubSonic as the ORM.
I suspect that the problem is based on a scheduled task that runs every few minutes and hits the database. When I look in SQL Server 2000's "Current Activity", I see there are:
100 processes with the status "sleeping"
100 locks
The 100 processes are from the Application ".Net SqlClient Data Provider" and the command is "AWAITING COMMAND".
So I'm guessing that's the issue... but how do I troubleshoot it? Does this sound like a deadlock condition in the db? As soon as I run
c:\> iisreset
everything's fine (for a while).
Thanks - I've just never encountered something like this and am not sure the best way to proceed.
Michael
It could be a duplicate of this problem - Is connection pooling working correctly in Subsonic?
If you're loading objects with Load() instead of LoadAndCloseReader(), each connection will be left open and eventually you'll exhaust the connection pool.
When you call Load() on a collection it will leave the Reader open - make sure you call LoadAndCloseReader() if you want the reader to close off - or use a using block.
It helps to have some source code as well.
I don't know anything about Subsonic, but maybe you are leaking database 'contexts'? I'd check that any database resource is being disposed after you're finished with it...
