I'm building a site that runs fine for a few hours, but then *.asmx and *.ashx calls start timing out.
The exception is: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
I'm using SubSonic as the ORM.
I suspect that the problem is based on a scheduled task that runs every few minutes and hits the database. When I look in SQL Server 2000's "Current Activity", I see there are:
100 processes with the status "sleeping"
100 locks
The 100 processes are from the Application ".Net SqlClient Data Provider" and the command is "AWAITING COMMAND".
So I'm guessing that's the issue... but how do I troubleshoot it? Does this sound like a deadlock condition in the db? As soon as I run
c:\> iisreset
everything's fine (for a while).
Thanks - I've just never encountered something like this and am not sure the best way to proceed.
Michael
It could be a duplicate of this problem - Is connection pooling working correctly in Subsonic?
If you're loading objects with Load() instead of LoadAndCloseReader(), each underlying reader (and its connection) is left open, and eventually you'll exhaust the connection pool.
When you call Load() on a collection it leaves the reader open - make sure you call LoadAndCloseReader() if you want the reader closed off, or use a using block.
It helps to have some source code as well.
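For illustration, here is the plain ADO.NET shape of that advice (this isn't SubSonic-specific; the connection string and query below are placeholders): dispose the reader and the connection, and the connection goes back to the pool.
// Placeholder connection string and query, for illustration only.
using (var connection = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;"))
using (var command = new SqlCommand("SELECT Id, Name FROM SomeTable", connection))
{
    connection.Open();
    // CommandBehavior.CloseConnection closes the connection (returning it to
    // the pool) as soon as the reader is disposed.
    using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.CloseConnection))
    {
        while (reader.Read())
        {
            // ... consume rows ...
        }
    }
}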
I don't know anything about Subsonic, but maybe you are leaking database 'contexts'? I'd check that any database resource is being disposed after you're finished with it...
Related
I've written a service that at times has to poll a database very frequently. Usually I'd create a new SqlConnection with the SqlDataAdapter and fire away like this:
var table = new DataTable();
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var adapter = new SqlDataAdapter(selectStatement, connection))
    {
        adapter.Fill(table);
    }
}
However, in a heavy-load situation (which occurs maybe once a week), the service might actually use up the entire connection pool, and it records the following exception.
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Multiple threads in the service have to access the SQL Server for various queries, and I'd like as many of them as possible to run in parallel (and that obviously works too well sometimes).
I thought about several possible solutions:
I thought about increasing the connection pool size; however, that might just delay the problem.
Then I thought about using a single connection for the service and keeping it open for the remainder of the service's run. That might be a simple option, but it would keep the connection open even when there is no work to do, and I'd have to handle connection resets by the server etc., whose effects I don't know.
Lastly I thought about implementing my own kind of pool that manages the number of concurrent connections and keeps the threads on hold until there is a free slot.
What would be the recommended procedure or is there a best practice way of handling this?
Well, the ideal solution (fixing the issue on the SQL Server side) wasn't available in the end, so I ended up checking the number of concurrent connections in the job queuing system.
The service will now not create another thread for document generation unless it can guarantee that a pooled connection is actually available. The bad bottleneck on the SQL Server is still in place; however, the service no longer generates exceptions (a rough sketch of this throttling follows below).
The downside, of course, is that the queue gets longer while some blocking query is executing on the SQL Server, which might delay document generation by a minute or two. So it isn't an ideal solution, but a workable one, since the delay isn't critical: the documents aren't needed immediately but are stored for archival purposes.
The better solution would have been to fix it on the SQL Server side.
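A rough sketch of that kind of throttling, assuming a SemaphoreSlim sized below the pool limit (the limit of 50 and the method name are made up for illustration; requires System.Threading, System.Data and System.Data.SqlClient):
// 50 is a made-up limit; keep it below the pool's "Max Pool Size" (default 100).
static readonly SemaphoreSlim dbSlots = new SemaphoreSlim(50, 50);
static DataTable RunQuery(string connectionString, string selectStatement)
{
    dbSlots.Wait();               // hold the worker until a slot is free
    try
    {
        var table = new DataTable();
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var adapter = new SqlDataAdapter(selectStatement, connection))
            {
                adapter.Fill(table);
            }
        }
        return table;
    }
    finally
    {
        dbSlots.Release();        // free the slot even if the query throws
    }
}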
I face the following problem:
Connection Pool has reached the maximum number of connections
I followed all the recommendations; the problem isn't as bad as it was before, but it still happens occasionally.
I use the using statement with all my connections and readers.
Lately I've faced the following error, and I had to reset IIS to fix it.
Connection Pool has reached the maximum number of connections. at IBM.Data.Informix.IfxConnectionPool.ReportOpenTimeOut()
at IBM.Data.Informix.IfxConnectionPool.Open(IfxConnection connection)
at IBM.Data.Informix.IfxConnPoolManager.Open(IfxConnection connection)
at IBM.Data.Informix.IfxConnection.Open()
at DB_Connection_s.DB_Connection.GetUserSystems(String emp_num)
Now I've read about the ClearAllPools() method, but I don't know when to use it, and whether it's considered a good solution to avoid having to reset IIS to fix the request timeout problem?
You can call ClearAllPools() when you don't have any active connections.
Also check out http://www.codeproject.com/Articles/46267/Connection-Pooling-in-ASP-NET
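For what it's worth, here is how the SqlClient equivalent is typically used; I don't know whether the IBM Informix provider exposes the same static method, so treat this purely as a sketch of the idea:
// Only call this from a point where no connections are in use, e.g. a
// scheduled maintenance task or an error handler of last resort.
// It discards all idle pooled connections for every connection string;
// connections that are still open are thrown away when they are closed.
SqlConnection.ClearAllPools();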
Ensure that your application closes all database connections correctly and consistently.
Ensure that the database is online.
Increase the connection timeout.
The error pattern indicates that connections are "leaked" over a long period. To fix this problem, ensure that your application closes all database connections correctly and consistently.
The exception does not indicate that the database is offline. The exception indicates a connection pool problem.
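As a concrete illustration of closing connections correctly and consistently with the Informix provider (assuming IBM.Data.Informix follows the usual ADO.NET pattern; the connection string and query are placeholders):
using (IfxConnection connection = new IfxConnection(connectionString))
{
    connection.Open();
    using (IDbCommand command = connection.CreateCommand())
    {
        command.CommandText = "SELECT system_name FROM user_systems"; // placeholder query
        using (IDataReader reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // ... read rows ...
            }
        }
    }
} // the connection is closed and returned to the pool here, even if an exception is thrown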
I have a problem in the code that I have written using .NET.
The problem is that somewhere I have some dodgy database code that means that after some time I get the following error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I know that it is because somewhere I haven't disposed of one of my datareaders or something similar, which means it still has the connection open, so it's not being returned to the pool. I'm having a bit of a problem finding where this is happening in my code, though.
So my question:
Is there any way to query the connection pool to find out what its in-use connections are doing? I'm just looking for a way to find out what query is being run, to allow me to find the offending piece of code.
For what it's worth, I don't have permissions to run an activity monitor on the database in question to find out that way.
Is there any way to query the connection pool to find out what its in-use connections are doing?
No. Not really. The connection pool is something that your application maintains (actually a List<DbConnectionInternal>). If you really wanted to, you could get to the connections in the pool via reflection or, if you are debugging, via a local or watch window, but you can't get to what's happening on that connection from there, or which object should have called Connection.Close (or Dispose). So that doesn't help.
If you're lucky you can execute sp_who or sp_who2 at the moment you get the timeout, when you've run out of pooled connections, but it's highly likely that most of the results are going to look like this:
SPID  Status    Login  Hostname   BlkBy  DBName  Command ....
----  --------  -----  ---------  -----  ------  ----------------
79    sleeping  uName  WebServer  .      YourDb  AWAITING COMMAND .....
80    sleeping  uName  WebServer  .      YourDb  AWAITING COMMAND .....
81    sleeping  uName  WebServer  .      YourDb  AWAITING COMMAND .....
82    sleeping  uName  WebServer  .      YourDb  AWAITING COMMAND .....
Which means that yes, indeed, your application has opened a lot of connections, didn't close them, and isn't even doing anything with them.
The best way to combat this is to profile your application using the ADO.NET Performance Counters, keep a close eye on NumberOfReclaimedConnections, and do a thorough code review.
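If you want to read that counter from code rather than from perfmon, something like this can work (assuming the standard ".NET Data Provider for SqlServer" counter category from the ADO.NET performance counters documentation; requires System.Diagnostics):
// One instance exists per process using SqlClient; dump the reclaimed-connection
// count for each of them. A non-zero value means connections were garbage
// collected without Close/Dispose being called - i.e. a leak.
var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");
foreach (string instance in category.GetInstanceNames())
{
    using (var counter = new PerformanceCounter(
        category.CategoryName, "NumberOfReclaimedConnections", instance, true))
    {
        Console.WriteLine(instance + ": " + counter.NextValue());
    }
}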
If you're really desperate you can Clear the pool when you encounter this problem.
using (SqlConnection cnn = new SqlConnection("YourCnnString"))
{
    try
    {
        cnn.Open();
    }
    catch (InvalidOperationException)
    {
        // Pool exhausted: clear this connection string's pool and retry once.
        SqlConnection.ClearPool(cnn);
        cnn.Open();
    }
}
I do however caution you against this, because it has the potential of choking your DB server: it allows your application to keep opening connections for as long as the server will allow, until it simply runs out of resources.
Have you tried loading up SSMS and running sp_who2 on the database in question?
USE [SomeDatabase]
EXEC sp_who2
That should show you what's happening at a moment in time.
I have run into a frustrating issue which I originally thought was a connection leak, but that does not seem to be the case. The scenario is this: the data access for this application uses the Enterprise Library (v4) from Microsoft. All data access calls are wrapped in using statements such as
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    ...more code
}
Now the index page of this application makes 8 calls to the database to load everything, and I can bring the application to its knees by refreshing the index about 15 times. It seems that I receive this error once the database reaches around 113 connections. Here is what makes this weird:
I have run similar code with the EntLib on high-traffic sites and have NEVER had this problem.
If I kill all the connections to the database and get the production application back up and running, then every time I refresh the application I can run this SQL
SELECT DB_NAME(dbid) as 'Database Name',
COUNT(dbid) as 'Total Connections'
FROM sys.sysprocesses WITH (nolock)
WHERE dbid > 0
GROUP BY dbid
I can see the number of connections actively increasing with each page refresh. Running the same code on my local box with the same connection string does not cause this problem. Further, if the production website is down, I can fire up the site via Visual Studio and run it fine; the only difference between the two is that the production site has Windows authentication turned on and my local copy doesn't. Turning Windows authentication off seems to have no effect on the server.
I have absolutely no clue what is causing this or why the connections are not being disposed of in SQL Server. The EntLib objects do not expose .Close() methods for anything, so I can't explicitly close the object.
Any thoughts?
Thanks!
Edit
Wow I just noticed that I never actually posted the error message. Oy. The actual connection error is: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Check that the stored procedure you are executing is not running into a row or table lock. Also, if possible, try deploying to another server and check whether the application crawls again.
Also try to increase the maximum allowed connections for your SQL server.
I think the "Timeout Expired" error is a general issue and may have several causes. Increasing the timeout can solve some of them, but not all.
You may also refer to the following links to troubleshoot and fix the error
http://techielion.blogspot.com/2007/01/error-timeout-expired-timeout-period.html
Could it be a configuration issue on the server?
How do you make a connection to the database on the production server?
That might be an area worth looking into.
While I don't know the answer, I can suggest that for some reason connections are not being closed by your application when run in production. (Stating the obvious.)
You might want to examine your network configuration between the web server and the SQL server. High-latency networks can cause connections not to be closed in time.
Also, it might help to look at the performance counters listed at the end of the following MSDN article:
http://msdn.microsoft.com/en-us/library/8xx3tyca%28VS.71%29.aspx
Finally, if nothing else helps, I'd get a debugger and the Enterprise Library source code onto production and debug your code inside the Enterprise Library to find out why connections are not being closed.
Silly question: are you properly closing your DataReader? If not, this could be the problem, and the difference in behaviour between dev and prod can be caused by different garbage collection patterns.
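For reference, this is roughly what a properly closed reader looks like with the Enterprise Library, assuming the usual Database.ExecuteReader(DbCommand) overload (the stored procedure name is the placeholder from the question):
Database db = DatabaseFactory.CreateDatabase();
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
using (IDataReader reader = db.ExecuteReader(dbCommand))
{
    while (reader.Read())
    {
        // ... consume rows ...
    }
} // disposing the reader releases the underlying connection back to the pool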
I would disable connection pooling and try to suppress the problem that way (heh). Just add ";Pooling=false" to your connection string.
Or, perhaps you could add something like the following 'cleanup' code to your page (which closes any connection left open when the page unloads) - right in the 'using' clause:
// dbCommand and result come from the surrounding data-access code.
System.Web.UI.Page page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null) {
    page.Unload += (EventHandler)delegate(object s, EventArgs e) {
        try {
            dbCommand.Connection.Close();
        } catch (Exception) {
        } finally {
            result = null;
        }
    };
}
Also, make sure you've enabled the 'shared memory' protocol if your SQL Server and IIS are on the same machine (a real performance booster)!
In developing a relatively simple web service that takes the data provided by a POST and records it in a database table, we're getting this error:
Exception caught: The remote server returned an error: (500) Internal Server Error.
Stack trace: at System.Net.HttpWebRequest.GetResponse()
on some servers, but not others. The ones that are getting this are the physical machines; the others are virtual, and obviously the physical servers are far more powerful.
As far as we can tell, the problem is that the DB connections aren't being released back to the pools after each query. I'm using the using pattern below:
using (VoteDaoDataContext dao = new VoteDaoDataContext())
{
    dao.insert_response_and_update_count(answerVal, swid, agent, geo, DateTime.Now, ip);
    dao.SubmitChanges();
    msg += "Thank you for your vote.";
    dao.Dispose();
}
I added the dao.Dispose() call to ensure that connections are released when the method finishes, but I don't know whether or not it's necessary.
Am I using this pattern correctly? Is there something else I need to do to ensure that connections get returned to the pools correctly?
Thanks!
Your diagnostic information is not good enough. An HTTP/500 isn't enough detail to really tell if your theory is correct. You're going to need to capture a full stack trace in your logging if you want to get to the problem. I think you've jumped to a conclusion here. And no, you do not need that Dispose() before the end of your using{} block. That's what using{} does.
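In other words, the block from the question can simply drop the explicit Dispose() call:
using (VoteDaoDataContext dao = new VoteDaoDataContext())
{
    dao.insert_response_and_update_count(answerVal, swid, agent, geo, DateTime.Now, ip);
    dao.SubmitChanges();
    msg += "Thank you for your vote.";
} // Dispose() runs automatically here, releasing the connection back to the pool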
I thought that dispose() call was redundant, but I wanted to be sure.
We're seeing the connection pools saturating in the SQL logs (I can't look at them directly, I'm just a developer, and this stuff's running in a prod environment), and my ops guy said he's seeing connections timing out... and once they time out, the server starts running again, until the next time it saturates the connection pool.
We're going through the process of tweaking the connection pool settings at the moment... I wanted to be certain that I wasn't doing anything wrong, since this is my first time using LINQ.
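For reference, the pool-related knobs live in the connection string; the values below are placeholders, not recommendations:
// "Max Pool Size" defaults to 100 and "Connect Timeout" is in seconds;
// the server, database, and numbers here are examples only.
string connectionString =
    "Data Source=myServer;Initial Catalog=VoteDb;Integrated Security=SSPI;" +
    "Pooling=true;Min Pool Size=0;Max Pool Size=200;Connect Timeout=30";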
Thanks!