I've written a service that occasionally has to poll a database at a very high rate. Usually I'd create a new SqlConnection with a SqlDataAdapter and fire away like this:
var table = new DataTable();
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var adapter = new SqlDataAdapter(selectStatement, connection))
    {
        adapter.Fill(table);
    }
}
However, in a heavy-load situation (which occurs maybe once a week), the service can actually use up the entire connection pool, and it records the following exception:
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Multiple threads in the service have to access the SQL Server for various queries, and I'd like as many of them as possible to run in parallel (which obviously works too well sometimes).
I thought about several possible solutions:
I thought about increasing the connection pool size, but that might just delay the problem.
Then I thought about using a single connection for the service and keeping it open for as long as the service runs. That might be a simple option, but it would keep the connection open even when there is no work to do, and I'd have to handle connection resets by the server etc., whose effects I don't know.
Lastly I thought about implementing my own kind of pool that manages the number of concurrent connections and keeps the threads on hold until there is a free slot.
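A rough sketch of that last option using SemaphoreSlim (the limit of 80 and the class name are just illustrative, chosen to stay below a Max Pool Size of 100):

using System.Data;
using System.Data.SqlClient;
using System.Threading;

class ThrottledDb
{
    // Cap concurrent database work below Max Pool Size so threads queue on
    // the semaphore instead of timing out while waiting for a pooled connection.
    private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(80, 80);
    private readonly string _connectionString;

    public ThrottledDb(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DataTable Query(string selectStatement)
    {
        Throttle.Wait();
        try
        {
            var table = new DataTable();
            using (var connection = new SqlConnection(_connectionString))
            using (var adapter = new SqlDataAdapter(selectStatement, connection))
            {
                adapter.Fill(table); // Fill opens and closes the connection itself
            }
            return table;
        }
        finally
        {
            Throttle.Release();
        }
    }
}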
What would be the recommended procedure or is there a best practice way of handling this?
Well, the ideal solution (fixing the issue on the SQL Server side) wasn't available in the end, so I ended up checking the number of concurrent connections in the job queuing system.
The service now doesn't create another thread for document generation unless it can guarantee that a pooled connection is actually available. The bad bottleneck on the SQL Server is still in place, but the service no longer generates exceptions.
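In sketch form, the check amounts to a non-blocking admission test before dispatching a job (the limit of 100 mirrors the pool's Max Pool Size; the names are illustrative, not the actual service code):

using System;
using System.Threading;
using System.Threading.Tasks;

class DocumentJobGate
{
    // Only start a generation thread when a pool slot is known to be free;
    // otherwise the job simply stays in the queue and is retried later.
    private static readonly SemaphoreSlim PoolSlots = new SemaphoreSlim(100, 100);

    public bool TryStartJob(Action job)
    {
        if (!PoolSlots.Wait(0)) // no free slot right now
            return false;

        Task.Run(() =>
        {
            try { job(); }
            finally { PoolSlots.Release(); }
        });
        return true;
    }
}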
The downside, of course, is that the queue gets longer while some blocking query is executing on the SQL Server, which can delay document generation by a minute or two. So it isn't an ideal solution, but it is a workable one, since the delay isn't critical: the documents aren't needed immediately but are stored for archival purposes.
The better solution would have been to fix it on the SQL Server side.
I have a few DbContexts that connect to the same database in the same application.
I noticed that EF6 has a new constructor that takes an existing DbConnection: https://msdn.microsoft.com/en-us/library/gg696604(v=vs.113).aspx
My question is then: let's say I hook up my DI framework to create a SqlConnection for each request and pass it to each of the DbContexts through this new constructor instead. Would that be the correct way to go about it? Or should the SQL connection be long-lived and not per request?
public async Task<SqlConnection> GetOpenedConnectionAsync()
{
    _connection = new SqlConnection(_config.AscendDataBaseConnectionString);
    await _connection.OpenAsync(_cancel.Token);
    return _connection;
}
Should I register the above per application lifetime or per request lifetime?
It depends on your use case, but in general I would highly discourage Singleton scope.
Generally the cost of creating a new connection and tearing it down is low, unless there is a long packet delay between the server and the database (e.g. mobile); if the servers are close, this is < 5 ms.
Let's say you have one database used by a thousand servers (load balancing or whatever). If all those servers always kept an open connection you might run into issues, but if each one opened and closed connections as and when needed, it would probably work.
If you have one database and only one or two servers, you could use a single connection (to save a small amount of time per request), but there are pitfalls, and I would HIGHLY discourage it, because:
If you open a transaction, no other query can run until that transaction finishes, as there can only be one transaction at a time per connection. E.g. user A lists all customers (takes 5 seconds), which means no other query can run until all the customers come back.
If a transaction gets opened and for whatever reason never commits, you basically lose all connectivity to the database until that transaction gets rolled back or committed, which may or may not happen.
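To make the scoped alternative concrete, here is a minimal sketch using EF6's DbContext(DbConnection, bool) constructor together with Microsoft.Extensions.DependencyInjection; the context name and wiring are assumptions, not a prescribed setup:

using System.Data.Common;
using System.Data.Entity;
using System.Data.SqlClient;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical context: contextOwnsConnection = false means the context
// will not dispose the shared connection when it is disposed itself.
public class OrdersContext : DbContext
{
    public OrdersContext(DbConnection connection)
        : base(connection, contextOwnsConnection: false) { }
}

public static class Wiring
{
    public static void Register(IServiceCollection services, string connectionString)
    {
        // One SqlConnection per request scope, shared by every context in that
        // scope and disposed by the container when the scope ends.
        services.AddScoped(_ => new SqlConnection(connectionString));
        services.AddScoped(sp => new OrdersContext(sp.GetRequiredService<SqlConnection>()));
    }
}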
I have a client application that connects to the DB via a using statement.
using (SqlConnection sqlConn = new SqlConnection(ConnectionString))
{
    sqlConn.Open();
    // SqlCommand code
}
I know this ensures that sqlConn.Close() and sqlConn.Dispose() get called. However, for each client, after running this code I still see some of the SPIDs on the SQL Server in sleeping mode, such as:
spid ecid status   loginame hostname blk dbname cmd              request_id
60   0    sleeping sa       AVENGER  0   Xmark  AWAITING COMMAND 0
61   0    sleeping sa       AVENGER  0   Xmark  AWAITING COMMAND 0
62   0    sleeping sa       AVENGER  0   Xmark  AWAITING COMMAND 0
I know this is because I use connection pooling; that is why they are sleeping, kept ready for reuse by further commands. I see these processes get flushed out over time (after 10 minutes or so) if I don't do anything with them.
My question is: is there a setting, either in C# or in SQL Server 2008, that will reduce this time to 2 minutes or so?
The problem I face is that my max pool size is reached quickly when many clients connect to the database within a short period of time. I realize I could fix it by increasing the pool size, but that feels like upgrading from a smaller bowl to a bigger one to catch water from a leaking roof.
I searched on MSDN and came across ClearPool(), which would let me explicitly remove the connections from the pool, but I think that defeats the purpose of connection pooling, and it is not really clean.
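For reference, the two knobs I have found so far: the Connection Lifetime keyword (alias Load Balance Timeout), which destroys a pooled connection on return to the pool once it is older than N seconds, and ClearPool(). A sketch with placeholder connection string values:

using System.Data.SqlClient;

class PoolTrimming
{
    // "Connection Lifetime=120" destroys a pooled connection when it is
    // returned to the pool and is more than 120 seconds old.
    private const string ConnectionString =
        "Data Source=AVENGER;Initial Catalog=Xmark;Integrated Security=true;" +
        "Connection Lifetime=120;";

    public void Demo()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            // ... SqlCommand work ...
        } // returned to the pool; destroyed on return once older than 120 s

        // Heavy-handed alternative: drop all idle connections in this pool now.
        SqlConnection.ClearPool(new SqlConnection(ConnectionString));
    }
}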
Any help is greatly appreciated.
As you know, the connection may not actually be closed even when you call sqlConn.Close(); it is returned to the pool and reused for any other client connecting with the same connection string.
Ideally, instead of trying to decrease this time, you should use a data access layer class that acts as a helper for creating and managing connections.
Using such an approach avoids pooling issues, and I have seen it used in good application architectures.
You should modify your code to follow this approach instead.
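A minimal sketch of what such a helper might look like, assuming a simple SELECT-into-DataTable path (the class name and connection string are illustrative):

using System.Data;
using System.Data.SqlClient;

public static class DataAccess
{
    // Central place to create connections; callers never hold one themselves.
    private static readonly string ConnectionString =
        "Data Source=AVENGER;Initial Catalog=Xmark;Integrated Security=true;";

    public static DataTable ExecuteQuery(string selectSql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var adapter = new SqlDataAdapter(selectSql, connection))
        {
            var table = new DataTable();
            adapter.Fill(table); // opens and closes the pooled connection
            return table;
        }
    }
}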
You are looking for the 'Idle Time-out' setting, which is configured on the application pool in IIS.
Basically what I do now is:
During initialization:
- create a connection and store it
- create DbDataAdapters and their commands with the stored connection
- call DbDataAdapter.Fill for each adapter to populate DataTables from the database

and when handling requests:
- insert/update/delete rows in the DataTables
- call DbDataAdapter.Update at some point, not necessarily every time (Update naturally uses the adapter's commands' connection)
Is this the correct way, or should I always create a new connection when a request arrives and assign it to DbDataAdapter.InsertCommand/UpdateCommand/DeleteCommand.Connection before calling DbDataAdapter.Update? I'm thinking about issues like reconnecting to the database after a network/server problem.
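Roughly, the shape of what I have now (the table, column, and class names are just for illustration):

using System.Data;
using System.Data.SqlClient;

class CustomerStore
{
    private SqlConnection _connection;
    private SqlDataAdapter _adapter;
    private SqlCommandBuilder _builder; // supplies insert/update/delete commands
    private readonly DataTable _customers = new DataTable();

    public void Initialize(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", _connection);
        _builder = new SqlCommandBuilder(_adapter);
        _adapter.Fill(_customers); // Fill opens/closes the connection as needed
    }

    public void HandleRequest(string name)
    {
        var row = _customers.NewRow();
        row["Name"] = name;
        _customers.Rows.Add(row);
        _adapter.Update(_customers); // uses the stored connection
    }
}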
Thanks & BR -Matti
Because you mention web services, think about concurrency: what happens if two or more concurrent requests are processed on your server side?
Is it ok to use the same connection? Is it ok to use the same DataAdapter?
The most probable answer is: it's not - it probably would not work.
Thus, the safest approach is to create a new connection and a new data adapter upon each request.
Since connections are pooled, there should be no issues with "reconnecting": the pool serves a connection whose handshake was most likely performed earlier, so there is no real performance hit.
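As a sketch, the per-request version of the same flow (SQL text and names are placeholders):

using System.Data;
using System.Data.SqlClient;

class RequestHandler
{
    // Everything is created per request; pooling makes the open cheap.
    public void SaveChanges(string connectionString, DataTable changedRows)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter("SELECT Id, Name FROM Customers", connection))
        using (var builder = new SqlCommandBuilder(adapter))
        {
            adapter.Update(changedRows); // opens/closes the pooled connection
        }
    }
}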
When opening a connection to SQL Server 2005 from our web app, we occasionally see this error:
"Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it.
We use MARS and connection pooling.
The exception originates from the following piece of code:
protected SqlConnection Open()
{
    SqlConnection connection = new SqlConnection();
    connection.ConnectionString = m_ConnectionString;
    try
    {
        connection.Open();
        if (m_ExecuteAsUserName != null)
        {
            string sql = string.Format("EXECUTE AS LOGIN = {0};", m_ExecuteAsUserName);
            ExecuteCommand(connection, sql);
        }
    }
    catch (Exception)
    {
        connection.Close();
        connection = null;
    }
    return connection;
}
I found an MS Connect article which suggests that the error is caused when a previous command has not yet terminated before the EXECUTE AS LOGIN command is sent. Yet how can this be if the connection has only just been opened?
Could this be something to do with connection pooling interacting strangely with MARS?
UPDATE: For the short term we have implemented a workaround: clearing out the connection pool whenever this happens, to get rid of the bad connection, as it otherwise keeps getting handed back to various users. (This now happens 5-10 times a day with only a small number of simultaneous users, so it is fairly annoying.) But if anyone has further ideas, we are still looking for a real solution...
I would say it's MARS rather than pooling.
From "Using Multiple Active Result Sets (MARS)"
Applications can have multiple default
result sets open and can interleave
reading from them.
Applications can
execute other statements (for example,
INSERT, UPDATE, DELETE, and stored
procedure calls) while default result
sets are open.
Connection pooling in its basic form means the connection open/close overhead is minimised, but any one connection (until MARS) has only one thing going on at a time. Pooling has been around for some time and just works out of the box.
MARS (I've not used it, BTW) introduces overlapping "stuff" going on within a single connection. So MARS, rather than connection pooling, is probably the bigger culprit of the two.
From "Extending Database Impersonation by Using EXECUTE AS"
When impersonating a principal by
using the EXECUTE AS LOGIN statement,
or within a server-scoped module by
using the EXECUTE AS clause, the scope
of the impersonation is server-wide.
This may explain why MARS is causing it: the same principal in two sessions, both running EXECUTE AS.
There may be something in that article of use, or try this:
IF ORIGINAL_LOGIN() = SUSER_SNAME() EXECUTE AS LOGIN = {0};
On reflection, and after reading up for this answer, I'm not convinced that trying to change the execution context for each session (MARS) on one connection is a good idea...
Don't blame connection pooling - MARS is quite notorious for wreaking havoc. It's not entirely MARS's fault; it's kind of half and half. The key thing to remember is that MARS is designed for, and only works with, "normal" DB use (meaning regular CRUD stuff, not admin batches). Any command that has a wide effect on the DB engine can trip up MARS, even on a single connection and a single thread (like running a setup batch to create tables, or a nested transaction).
Having said that, one can easily just blame MARS, but it works perfectly fine for normal CRUD scenarios, which are like 99% of use (and low-efficiency things like ORMs and LINQ depend on it for life). So it's important to learn that if you want to hack SQL through a connection, you can't use MARS. For example, I had setup code that created the whole DB from scratch, because that is very convenient for deployment, but it shared a connection string with the web service it was deploying - oops :-) It took me a few days of digging to learn my lesson. So now I maintain the separation of concerns (which is always good) and the problems went away.
Have you tried using REVERT at the end of your SQL statement?
http://msdn.microsoft.com/en-us/library/ms178632.aspx
I always do this just to make sure the current context is back to normal.
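For example, a sketch of pairing the impersonation with REVERT in a single batch, mirroring the question's Format(...) pattern (the helper name is made up, and the inlined login assumes trusted input):

using System.Data.SqlClient;

static class Impersonation
{
    // Pair EXECUTE AS with REVERT in the same batch so the impersonated
    // context cannot ride back into the pool with the physical connection.
    public static void RunAsLogin(SqlConnection connection, string loginName, string sql)
    {
        string batch = string.Format(
            "EXECUTE AS LOGIN = '{0}'; {1}; REVERT;", loginName, sql);
        using (var command = new SqlCommand(batch, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}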
I'm building a site that runs fine for a few hours, but then *.asmx and *.ashx calls start timing out.
The exception is: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
I'm using SubSonic as the ORM.
I suspect that the problem stems from a scheduled task that runs every few minutes and hits the database. When I look in SQL Server 2000's "Current Activity", I see there are:
100 processes with the status "sleeping"
100 locks
The 100 processes are from the Application ".Net SqlClient Data Provider" and the command is "AWAITING COMMAND".
So I'm guessing that's the issue... but how do I troubleshoot it? Does this sound like a deadlock condition in the db? As soon as I run

c:\> iisreset

everything's fine (for a while).
Thanks - I've just never encountered something like this and am not sure the best way to proceed.
Michael
It could be a duplicate of this problem - Is connection pooling working correctly in Subsonic?
If you're loading objects with Load() instead of LoadAndCloseReader(), each connection is left open, and eventually you'll exhaust the connection pool.
When you call Load() on a collection it leaves the reader open - make sure you call LoadAndCloseReader() if you want the reader closed off, or use a using block.
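Independent of SubSonic, the shape of the leak looks like this in plain ADO.NET (the query text is illustrative):

using System.Data;
using System.Data.SqlClient;

class ReaderExample
{
    public void ReadCustomers(string connectionString)
    {
        var connection = new SqlConnection(connectionString);
        connection.Open();
        var command = new SqlCommand("SELECT Id, Name FROM Customers", connection);

        // CloseConnection ties the connection's fate to the reader: disposing
        // the reader returns the connection to the pool. Skip the using block
        // (effectively what a bare Load() does) and the connection leaks.
        using (var reader = command.ExecuteReader(CommandBehavior.CloseConnection))
        {
            while (reader.Read()) { /* map rows */ }
        }
    }
}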
It helps to have some source code as well.
I don't know anything about SubSonic, but maybe you are leaking database 'contexts'? I'd check that every database resource is disposed once you're finished with it...