Slow - Open and close connection to SQL Server - C#

I have an ASP.NET MVC application running on Windows Azure with SQL Azure. The application uses SqlClient to connect to the database. I always use the same connection string (to take advantage of connection pooling):
using (SqlConnection conn = new SqlConnection("Static Connection String"))
{
    using (var command = conn.CreateCommand())
    {
        conn.Open();
        return command.ExecuteNonQuery();
    }
}
I noticed that a considerable amount of time is spent opening the connections, which makes the implementation slow. For example, if I have a page with four select lists, the application needs to open the connection four times to fill them. If I do this with one command that returns all the lists, the performance is great, but when I open and close the connection to fetch each list separately, performance drops sharply.
This does not happen with a Windows Forms application.
My question is: is there some limitation in the environment I am running in?

The problem is that you execute 4 queries, not that you open 4 connections - you don't. Connection pooling means that you reuse the same connection.
You still have to send a request to the server though, wait and retrieve the answer. It's the 4 roundtrips that kill performance. 4 queries will be up to 4 times slower than a single query, no matter what. If the data retrieved is small, the roundtrip overhead is way more expensive than the query itself.
You should try to reduce database calls or eliminate them altogether:
- Use batching to combine multiple queries into a single call (sketched below).
- Use caching for lookup data so that you don't have to retrieve the same selections all the time.
- Finally, use output caching to avoid rendering the same page if the request parameters don't change.
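
As a minimal sketch of the batching option (the table and column names are placeholders, not from the original question): all four SELECTs go to the server in one roundtrip, and SqlDataReader.NextResult advances through the result sets.
// All four lists in one roundtrip; table names are placeholders.
using (var conn = new SqlConnection("Static Connection String"))
using (var command = conn.CreateCommand())
{
    command.CommandText =
        "SELECT Id, Name FROM ListA; " +
        "SELECT Id, Name FROM ListB; " +
        "SELECT Id, Name FROM ListC; " +
        "SELECT Id, Name FROM ListD;";
    conn.Open();
    using (var reader = command.ExecuteReader())
    {
        do
        {
            while (reader.Read())
            {
                // fill the matching select list from the current result set
            }
        } while (reader.NextResult()); // advance to the next SELECT's rows
    }
}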

Is it possible to limit the total number of database connections in .NET?

I have an API server that interacts with an Azure-hosted PostgreSQL database instance. There are a large number (>50) of postgres databases on that one postgres server. Any given API request may have to interact with any given database.
Unfortunately, our Azure plan for Postgres only allows 50 connections. I regularly have requests fail because Postgres won't accept more. My ADO.NET connection pool is still holding onto database connections for recently used databases, while connections to other databases error out.
I've tried setting the Max Pool Size on my connection strings, but it appears that the connection pool limit is applied per database, not per server. I still need as much pooling as I can get; opening new connections can take >1500 ms, which is beyond my SLA if it happens on every request.
Is there a way to ask .NET to never open more than 50 database connections, either per server or total?
Set Max Pool Size and, instead of connecting to a separate database, connect to the same database on the server and then execute the \connect statement to change to the desired database. The following code fragment demonstrates creating an initial connection to the default (postgres) maintenance database and then switching to the desired database specified in the databaseName string variable.
// Assumes that connectionString connects to the server's default
// maintenance database (postgres) and that databaseName holds the
// name of the target database.
using (NpgsqlConnection connection = new NpgsqlConnection(connectionString))
{
    connection.Open();
    using (NpgsqlCommand command = connection.CreateCommand())
    {
        command.CommandText = "\\connect " + databaseName;
        command.ExecuteNonQuery();
    }
}
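On the Max Pool Size part, a hedged sketch of the connection string (host and credentials are placeholders; "Maximum Pool Size" is Npgsql's keyword): since pools are keyed by the exact string, routing every request through this one string yields a single pool capped at 50.
// One shared string = one pool of at most 50 physical connections.
var connectionString =
    "Host=myserver.postgres.database.azure.com;Database=postgres;" +
    "Username=apiuser;Password=secret;Maximum Pool Size=50";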

C# SqlConnections using up entire connection pool

I've written a service that occasionally has to poll a database at a very high rate. Usually I'd create a new SqlConnection with the SqlDataAdapter and fire away like this:
var table = new DataTable();
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var adapter = new SqlDataAdapter(selectStatement, connection))
    {
        adapter.Fill(table);
    }
}
However, in a heavy-load situation (which occurs maybe once a week), the service may actually use up the entire connection pool, and it records the following exception.
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Multiple threads in the service have to access the SQL Server for various queries, and I'd like as many of them as possible to run in parallel (and that obviously works too well sometimes).
I thought about several possible solutions:
I thought about increasing the connection pool size; however, that might just delay the problem.
Then I thought about using a single connection for the service and keeping it open for the remainder of the service's lifetime. That might be a simple option; however, it would keep the connection open even when there is no work to do, and it would have to handle connection resets by the server etc., whose effects I do not know.
Lastly I thought about implementing my own kind of pool that manages the number of concurrent connections and keeps the threads on hold until there is a free slot.
What would be the recommended procedure or is there a best practice way of handling this?
In the end the ideal solution (fixing the issue on the SQL Server side) was not available, so I ended up checking the number of concurrent connections in the job queuing system.
The service now does not spawn another thread for document generation unless it can guarantee that a pooled connection is actually available. The bad bottleneck on the SQL Server is still in place; however, the service no longer throws exceptions.
The downside, of course, is that the queue gets longer while some blocking query is executing on the SQL Server, which might delay document generation by a minute or two. So it isn't an ideal solution, but it is a workable one, since the delay isn't critical: the documents aren't needed immediately but are stored for archival purposes.
The better solution would have been to fix it SQL Server side.
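
For the third option the asker lists (a self-managed cap), here is a minimal sketch using SemaphoreSlim, assuming a limit of 100 to match the default Max Pool Size; waiting threads queue for a slot instead of hitting the pool timeout.
// Gate limiting concurrent database work; 100 matches the default
// Max Pool Size - adjust it to your pool settings.
private static readonly SemaphoreSlim _gate = new SemaphoreSlim(100, 100);

public async Task<DataTable> QueryAsync(string connectionString, string selectStatement)
{
    await _gate.WaitAsync(); // wait here instead of timing out in the pool
    try
    {
        var table = new DataTable();
        using (var connection = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter(selectStatement, connection))
        {
            adapter.Fill(table);
        }
        return table;
    }
    finally
    {
        _gate.Release(); // always free the slot, even after an exception
    }
}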

Should SqlConnections be long- or short-lived with EF?

I have a few DbContexts that connect to the same database in the same application.
I noticed that EF6 has a new constructor: https://msdn.microsoft.com/en-us/library/gg696604(v=vs.113).aspx
My question is then: let's say I hook up my DI framework to create a SqlConnection for each request and pass it to each of the DbContexts with this new constructor instead; would that be the correct way to go about it? Or should the SqlConnection be long-lived and not per-request?
public async Task<SqlConnection> GetOpenedConnectionAsync()
{
    _connection = new SqlConnection(_config.AscendDataBaseConnectionString);
    await _connection.OpenAsync(_cancel.Token);
    return _connection;
}
Should I register the above per application lifetime or per request lifetime?
It depends on your use case, but in general I would highly discourage singleton scope.
Generally the cost of creating a new connection and tearing it down is low, unless there is a long packet delay between the server and the database (e.g. mobile); if the servers are close, this is < 5 ms.
Say you have one database used by a thousand servers (load balancing or whatever). If all those servers always kept an open connection, you might run into issues; but if each one opened and closed connections as and when needed, that would probably work.
If you have one database and only one or two servers, you could hold a single connection (to save a small amount of time per request), but there are pitfalls, and I would HIGHLY discourage it because:
- If you open a transaction, no other query will be able to run until that transaction finishes, as there can only be one transaction at any time per connection. E.g. user A tries to list all customers (takes 5 seconds); no other query can run until all the customers come back.
- If a transaction gets opened and for whatever reason does not commit, you basically lose all connectivity to the database until that transaction gets rolled back or committed, which may or may not happen.
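
As a hedged sketch of the per-request option, assuming OrdersContext and BillingContext are hypothetical DbContext subclasses whose constructors forward to EF6's DbContext(DbConnection existingConnection, bool contextOwnsConnection):
// Per request: one SqlConnection shared by both contexts. Passing
// false for contextOwnsConnection means disposing a context does not
// close the connection out from under the other one.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var orders = new OrdersContext(connection, false))
    using (var billing = new BillingContext(connection, false))
    {
        // both contexts run their queries over the same open connection
    }
} // disposing the SqlConnection here returns it to the pool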

How to handle ADO.NET DbConnection(s) in a long-running web service with DbDataAdapters?

Basically what I do now is:
During initialization:
- create a connection and store it
- create DbDataAdapters and their commands with the stored connection
- call DbDataAdapter.Fill for each adapter to populate DataTables from the database

and when handling requests:
- insert/update/delete rows in the DataTables
- call DbDataAdapter.Update at some point, not necessarily every time (Update naturally uses the adapter's commands' connection)
Is this the correct way, or should I always create a new connection when a request arrives and assign it to DbDataAdapter.Insert/Update/DeleteCommand.Connection before calling DbDataAdapter.Update? I'm thinking about issues like reconnecting to the database after a network/server problem.
Thanks & BR -Matti
Because you mention web services, think about concurrency: what if two or more concurrent requests are processed on your server side?
Is it ok to use the same connection? Is it ok to use the same DataAdapter?
The most probable answer is: it's not; it probably would not work.
Thus, the safest approach is to create a new connection and a new data adapter upon each request.
Since connections are pooled, there should be no issues with "reconnecting": the pool serves a connection for which the handshake was most likely performed earlier, so there is no performance hit.
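
A minimal sketch of that per-request pattern (the query and table name are placeholders). Note that SqlDataAdapter.Fill opens the connection itself if it receives a closed one, and closes it again afterwards:
public DataTable LoadOrders(string connectionString)
{
    var table = new DataTable();
    using (var connection = new SqlConnection(connectionString))
    using (var adapter = new SqlDataAdapter("SELECT * FROM Orders", connection))
    {
        // Fill opens and closes the pooled connection by itself
        adapter.Fill(table);
    }
    return table;
}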

SQL Exception: "Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it

When opening a connection to SQL Server 2005 from our web app, we occasionally see this error:
"Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it.
We use MARS and connection pooling.
The exception originates from the following piece of code:
protected SqlConnection Open()
{
    SqlConnection connection = new SqlConnection();
    connection.ConnectionString = m_ConnectionString;
    if (connection != null)
    {
        try
        {
            connection.Open();
            if (m_ExecuteAsUserName != null)
            {
                string sql = string.Format("EXECUTE AS LOGIN = {0};", m_ExecuteAsUserName);
                ExecuteCommand(connection, sql);
            }
        }
        catch (Exception exception)
        {
            connection.Close();
            connection = null;
        }
    }
    return connection;
}
I found an MS Connect article which suggests that the error is caused when a previous command has not yet terminated before the EXECUTE AS LOGIN command is sent. Yet how can this be if the connection has only just been opened?
Could this be something to do with connection pooling interacting strangely with MARS?
UPDATE: As a short-term workaround, we clear out the connection pool whenever this happens, to get rid of the bad connection, which otherwise keeps getting handed back to various users. (This now happens 5-10 times a day with only a small number of simultaneous users, so it is fairly annoying.) But if anyone has any further ideas, we are still looking for a real solution...
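For reference, a minimal sketch of that workaround with SqlClient's SqlConnection.ClearPool, which discards every pooled connection associated with the given connection string:
using (var connection = new SqlConnection(m_ConnectionString))
{
    // drop the whole pool so the corrupted connection cannot be
    // handed out to another request
    SqlConnection.ClearPool(connection);
}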
I would say it's MARS rather than pooling.
From "Using Multiple Active Result Sets (MARS)"
Applications can have multiple default result sets open and can interleave reading from them.
Applications can execute other statements (for example, INSERT, UPDATE, DELETE, and stored procedure calls) while default result sets are open.
Connection pooling in its basic form means the connection open/close overhead is minimised, but any connection (until MARS) has one thing going on at any one time. Pooling has been around for some time and just works out of the box.
MARS (I've not used it, BTW) introduces overlapping "stuff" going on within any single connection. So MARS, rather than connection pooling, is probably the bigger culprit of the two.
From "Extending Database Impersonation by Using EXECUTE AS"
When impersonating a principal by using the EXECUTE AS LOGIN statement, or within a server-scoped module by using the EXECUTE AS clause, the scope of the impersonation is server-wide.
This may explain why MARS is causing it: the same principal in two sessions, both running EXECUTE AS.
There may be something in that article of use, or try this:
IF ORIGINAL_LOGIN() = SUSER_SNAME() EXECUTE AS LOGIN = {0};
On reflection, and after reading for this answer, I'm not convinced that trying to change the execution context for each session (MARS) on one connection is a good idea...
Don't blame connection pooling - MARS is quite notorious for wreaking havoc. It's not entirely to blame, but it's kind of half and half. The key thing to remember is that MARS is designed for, and only works with, "normal" DB use (meaning regular CRUD stuff, no admin batches). Any command that has a wide effect on the DB engine can trip MARS up, even on just one connection and a single thread (like running a setup batch to create tables, or a nested transaction).
Having said that, one can easily just blame MARS, but it works perfectly fine for normal CRUD scenarios, which are like 99% of cases (and inefficiency-prone things like ORMs and LINQ depend on it for life). The lesson is that if you want to hack SQL through a connection, you can't use MARS. For example, I had setup code that created a whole DB from scratch, because that's very convenient for deployment, but it shared a connection string with the web service it was deploying - oops :-) It took me a few days of digging to learn my lesson. So now I maintain the separation of concerns (which is always good) and the problems went away.
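A sketch of that separation, with placeholder connection strings; because pools are keyed by the exact string, the admin/setup code also gets its own pool:
// Application queries: MARS enabled, pooled as usual.
const string AppConnectionString =
    "Server=myserver;Database=MyDb;Integrated Security=true;" +
    "MultipleActiveResultSets=True";

// Setup/admin batches: separate string with MARS off, and therefore
// a separate connection pool.
const string AdminConnectionString =
    "Server=myserver;Database=MyDb;Integrated Security=true;" +
    "MultipleActiveResultSets=False";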
Have you tried using REVERT at the end of your SQL statement?
http://msdn.microsoft.com/en-us/library/ms178632.aspx
I always do this to just make sure the current context is back to normal.
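A hedged sketch of that, reusing the ExecuteCommand helper from the question: the impersonated work runs between EXECUTE AS and a REVERT that restores the context before the connection returns to the pool.
string sql = string.Format("EXECUTE AS LOGIN = {0};", m_ExecuteAsUserName);
ExecuteCommand(connection, sql);
try
{
    // ... run the work that needs the impersonated login here ...
}
finally
{
    // put the session back to the original login before the pooled
    // connection is handed to anyone else
    ExecuteCommand(connection, "REVERT;");
}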
