ODP.NET Connection Pooling Parameters - C#

I am trying to configure connection pooling for my .NET application using ODP.NET version 2.111.6.20. The database is Oracle 11.1.
I am using the following connection string in my .NET 2.0 application:
Data Source=prod; User Id=FAKE_USER; Password=FAKE_PASS; Pooling=true; Min Pool Size=2; Max Pool Size=5; Connection Timeout=30;
According to the documentation, the connection pool should initialize with 2 connections and grow up to 5 connections as needed; it should never exceed 5 connections.
What I am seeing is that the connections grow 2 at a time, up to 10 connections. I am monitoring the connections by querying the v$session view in the Oracle database, so I know they originate from this specific application.
If anyone can help me identify what in this application's connection pool might be allowing more connections than Max Pool Size, I would appreciate it.
Sample C# Code
Here is a sample of the code making the calls to the database:
const string connectionString = "Data Source=prod; User Id=FAKE_USER; Password=FAKE_PASS; Pooling=true; Min Pool Size=5; Max Pool Size=5; Connection Timeout=30;";

using (OracleConnection connection = new OracleConnection(connectionString)) {
    connection.Open();

    using (OracleCommand command = new OracleCommand("ALTER SESSION SET TIME_ZONE='UTC'", connection)) {
        command.ExecuteScalar();
    }

    using (OracleTransaction transaction = connection.BeginTransaction()) {
        const string procSql = @"BEGIN P_SERVICES.UPDATE_VERSION(:id, :version, :installDate); END;";

        using (OracleCommand command = new OracleCommand(procSql, connection)) {
            command.Parameters.Add(new OracleParameter("id", OracleDbType.Varchar2) { Value = id });
            command.Parameters.Add(new OracleParameter("version", OracleDbType.Varchar2) { Value = version });
            command.Parameters.Add(new OracleParameter("installDate", OracleDbType.TimeStamp) { Value = dateUpdated });

            try {
                command.ExecuteNonQuery();
            } catch (OracleException oe) {
                if (Log.IsErrorEnabled) {
                    Log.ErrorFormat("Update Error: {0}", oe.Message);
                }
                throw;
            }

            transaction.Commit();
        }
    }
}

I have found the reason that the number of connections seen in the database was increasing past the Max Pool Size set in the connection string.
The Application Pool in IIS was configured with "Maximum number of worker processes" set to something other than the default of 1. What I have found is that the number of connections seen in the database can grow up to Max Pool Size * number of worker processes.
So with a Max Pool Size of 5 and 5 worker processes, the total number of connections allowed is 25. Each worker process has its own instance of the connection pool, which is not shared with the other worker processes.
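If you want to confirm this from the database side, you can tag each session with the id of the worker process that owns it, right after opening the connection. Below is a minimal sketch, assuming the unmanaged ODP.NET provider (Oracle.DataAccess.Client) and that your schema is allowed to call DBMS_APPLICATION_INFO; the tag format is just an example.
using System.Diagnostics;
using Oracle.DataAccess.Client;

public static class ConnectionTagger
{
    // Stamps the current session with the worker process id so that
    // v$session.client_info shows which w3wp.exe owns the connection.
    public static void TagSession(OracleConnection connection)
    {
        string tag = "w3wp pid " + Process.GetCurrentProcess().Id;

        using (OracleCommand command = new OracleCommand(
            "BEGIN DBMS_APPLICATION_INFO.SET_CLIENT_INFO(:info); END;", connection))
        {
            command.Parameters.Add(new OracleParameter("info", OracleDbType.Varchar2) { Value = tag });
            command.ExecuteNonQuery();
        }
    }
}
If you then add client_info to your v$session query and group by it, you should see a separate pool (up to Max Pool Size connections) for each worker process.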

You can use this query to monitor your connection counts and statuses. Using it, I was able to confirm that the connection string settings are working; explanation below.
select COUNT(*) AS Connections
     , s.username
     , s.status
     , s.module
     , s.osuser
  from v$process p
  join v$session s on s.paddr = p.addr
 where s.username is not null
 group by s.username
     , s.status
     , s.module
     , s.osuser
I ran this with 2 pages that did a bunch of database retrievals. Here are my differing results:
Max Pool Size=5
I saw fluctuations in the count under the empty module with the same username as the webserver. I'm not sure why they showed up under that bucket as well as under the webserver.
Max Pool Size=1
When I restricted the pool size, I only ever saw 1 connection for the empty module and 1 connection for the webserver, but then connections popped up under DBMS_SCHEDULER, which suggests to me that the rest of the retrievals were pending?
I think this proves that the Max Pool Size is working, but I'm not certain.

According to Tom Kyte:
A connection is a physical circuit between you and the database. A connection might be one of many types -- most popular being DEDICATED server and SHARED server. Zero, one or more sessions may be established over a given connection to the database. A process will be used by a session to execute statements. Sometimes there is a one to one relationship between CONNECTION -> SESSION -> PROCESS (eg: a normal dedicated server connection). Sometimes there is a one to many from connection to sessions. A process does not have to be dedicated to a specific connection or session, however; for example, when using shared server (MTS), your SESSION will grab a process from a pool of processes in order to execute a statement. When the call is over, that process is released back to the pool of processes.
So running
select username from v$session where username is not null
will show current sessions (not connections).
To see the connections you may use
select username, program from v$process;
A useful book about JDBC and sessions vs. connections can be found here.

If you absolutely have to fix this, and are willing to get down and dirty with performance counters, this blog post might be of help. At the very least it might help narrow down any discrepancy between how many connections Oracle is reporting and how many pooled and non-pooled connections .NET claims to have.
http://blog.ilab8.com/2011/09/02/odp-net-pooling-and-connection-request-timed-out/
These counters seem like they would be particularly useful:
NumberOfActiveConnectionPools
NumberOfActiveConnections
NumberOfFreeConnections
NumberOfInactiveConnectionPools
NumberOfNonPooledConnections
NumberOfPooledConnections
NumberOfReclaimedConnections
NumberOfStasisConnections
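The exact category name these counters are published under varies with the ODP.NET version installed (and the counters generally have to be enabled first, which the linked post covers). If you are unsure what the category is called on your machine, a small diagnostic like the sketch below can list anything Oracle-related; the "Oracle"/"ODP" name filter is an assumption, not a documented name.
using System;
using System.Diagnostics;

class ListOdpCounters
{
    static void Main()
    {
        foreach (PerformanceCounterCategory category in PerformanceCounterCategory.GetCategories())
        {
            // Assumption: the ODP.NET counter category name contains "Oracle" or "ODP".
            if (category.CategoryName.IndexOf("Oracle", StringComparison.OrdinalIgnoreCase) < 0 &&
                category.CategoryName.IndexOf("ODP", StringComparison.OrdinalIgnoreCase) < 0)
            {
                continue;
            }

            Console.WriteLine(category.CategoryName);

            // Multi-instance categories publish one instance per pool/process.
            foreach (string instance in category.GetInstanceNames())
            {
                foreach (PerformanceCounter counter in category.GetCounters(instance))
                {
                    Console.WriteLine("  {0} [{1}] = {2}", counter.CounterName, instance, counter.NextValue());
                }
            }
        }
    }
}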

Related

SqlCommand Times Out, CommandTimeout and ConnectionTimeout both = 0

I'm running a large number of extracts from a SQL database via a C# Script Task within an SSIS package. The connection to the source is acquired from a connection manager within the package:
object rawConnection = Dts.Connections[sqlSpecItems["ConnectionManager"]].AcquireConnection(Dts.Transaction);
SqlConnection connectionFromCM = (SqlConnection)rawConnection;
(sqlSpecItems is a Dictionary that supplies the name of the connection manager to use)
The ConnectionTimeout property of the connection manager is set to 0. The connection string generated for the CM is:
Data Source=MyDatabase;User ID=MyUserName;Initial Catalog=MyDatabaseName;Persist Security Info=True;Asynchronous Processing=True;Connect Timeout=0;Application Name=MyPackageApplicationName;
The connection is used to return a SqlDataReader object as follows:
private SqlDataReader GetDataReaderFromQuery(string sqlQueryToExecute)
{
    // connect to server
    SqlConnection sqlReaderSource = GetSourceSQLConnection();

    // create command
    SqlCommand sqlReaderCmd = new SqlCommand(sqlQueryToExecute, sqlReaderSource)
    {
        CommandType = CommandType.Text,
        CommandTimeout = 0
    };

    // execute query to return data to reader
    SqlDataReader sqlReader = sqlReaderCmd.ExecuteReader();
    return sqlReader;
}
The operation fails with the error message:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Logged start and end times for the operation are typically around 50s apart, but can be up to 120s.
The source database is a cloud-hosted Business Central SQL database. The failures occur on a small number of specific extracts, the larger ones (although by no means large in absolute terms, around 20k rows). When attempting to query these via SSMS, there is typically a delay while the data is fetched from the source into memory, which I suspect to be the reason for the timeout (notwithstanding the timeout of 0 set for both the connection and the command). Note that once the failure has happened, the data is in memory, so it doesn't repeat if I restart the job.
I've looked at various answers on this site, as well as across other sites. Nothing I have seen so far has given me any idea of what I might try to resolve this problem. Any help would be appreciated.
So, I found the problem. Just in case it helps anyone else: the timeout wasn't being thrown by the reader. Rather, it was the SqlBulkCopy operation downstream that I pass the reader into. Adding:
bulkCopy.BulkCopyTimeout = 0;
has cleared the problem.
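For context, here is roughly what that fix looks like in place. This is only a sketch; the destination connection string, table name, and the reuse of GetDataReaderFromQuery from above are placeholders/assumptions.
using (SqlConnection destinationConnection = new SqlConnection(destinationConnectionString))
{
    destinationConnection.Open();

    using (SqlDataReader sqlReader = GetDataReaderFromQuery(sqlQueryToExecute))
    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
    {
        bulkCopy.DestinationTableName = "dbo.MyDestinationTable"; // placeholder
        bulkCopy.BulkCopyTimeout = 0; // 0 = no timeout; the default is 30 seconds

        // The reader is streamed into the destination table; this is the call
        // that was actually hitting the timeout.
        bulkCopy.WriteToServer(sqlReader);
    }
}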

SQL Server connection pooling on continuous running application

I've got a problem with connection pooling in a continuously running program. The problem occurs when I'm making a lot of queries to the DB from Tasks (every 4 minutes, 5 times: querying 3 tables and saving the result to one). The connection pool runs out of Max Pool Size. The strange thing about this is that on the DB I have about 100 AWAITING COMMAND entries for that particular connection string / machine / user. My understanding is that AWAITING COMMAND means the connection can be reused, but for some strange reason, unknown to me, commands run from Tasks cannot reuse the available connections; they just sit and wait, and after some time I get an error that I've reached the max pool size.
Assumptions so far:
When running commands from Tasks, the DB somehow treats the available connections as invalid for reuse.
Connections aren't closing, but why? I seem to be closing them with the using keyword. Moreover, there are 100 AWAITING COMMAND entries on the DB.
The handles aren't being garbage collected for some reason? But the 100 AWAITING COMMAND entries suggest otherwise.
UPDATE: LOCAL DB OBSERVATIONS/SUMMARY:
When I try to replicate this on a local SQL Server Express DB, the problem only happens in a very contrived situation: I had to add Thread.Sleep(600000) to simulate it, and only then did I get the max pool error, but in that case all the connections are open, so it's rather self-explanatory.
In the local -> server scenario, I don't think I could have 100 connections open at one time; they seem to stay open for some other reason. When launching this program in the localMachine -> serverDB situation, I don't even need to add the Thread.Sleep(600000) in order to crash the program.
All of the above are my assumptions based on observations. I can't think of what is causing this in my continuously running service that queries the DB every 4 minutes.
PS. After my full local testing I'm confused: does AWAITING COMMAND mean that the connection can be reused?
UPDATE 2: Forgot to mention that my original program can run for a couple of days before I eventually hit this max pool error.
Below is the program that can generate this kind of problem:
using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

namespace Pooling
{
    class Program
    {
        private static int connectionIterations;
        private static string connectionString = "Data Source=localhost;Initial Catalog=localDB;Integrated Security=True";

        static void Main(string[] args)
        {
            try
            {
                Iterations();
                while (true)
                {
                    ConnectionSnowball();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
                throw;
            }
        }

        private static void ConnectionSnowball()
        {
            Parallel.For(0, connectionIterations, i =>
            {
                try
                {
                    Console.WriteLine($"Connection id: {i}");
                    using (SqlConnection connection = new SqlConnection(connectionString))
                    {
                        SqlCommand cmd = new SqlCommand("SELECT 1 FROM test_table", connection);
                        connection.Open();
                        cmd.ExecuteNonQuery();
                        Thread.Sleep(600000);
                    }
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                    throw;
                }
            });
        }

        private static void Iterations()
        {
            connectionIterations = 200;
        }
    }
}
I debugged your code and found no connection leaks. You just have a connection pool overflow. I checked two possible solutions for you.
Disable connection pooling:
private static string connectionString = "Data Source=localhost;Initial Catalog=localDB;Integrated Security=True;Pooling=False";
Increase the connection pool size:
private static string connectionString = "Data Source=localhost;Initial Catalog=localDB;Integrated Security=True;Max Pool Size=200";
To test how the connections increase and decrease before, during, and after the ConnectionSnowball() call, you can use this SQL query:
select count(1) from sys.dm_exec_sessions where database_id = DB_ID(N'localDB')
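If you would rather watch that count from code while the snowball runs, a small polling loop like the sketch below also works. It assumes the same connectionString field and localDB catalog as the program above; run it from a separate process so it doesn't compete for the pool you are observing.
// Polls the session count for the localDB database once per second.
private static void MonitorSessionCount(CancellationToken token)
{
    const string countSql = "select count(1) from sys.dm_exec_sessions where database_id = DB_ID(N'localDB')";

    while (!token.IsCancellationRequested)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(countSql, connection))
        {
            connection.Open();
            int sessions = (int)command.ExecuteScalar();
            Console.WriteLine($"{DateTime.Now:HH:mm:ss} sessions: {sessions}");
        }

        Thread.Sleep(1000);
    }
}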
More details about connection string parameters
SqlConnection.ConnectionString Property
Another possible solution is to use SQL jobs. For this task, that may be a more appropriate approach, since a large number of connections is very resource intensive.
As there are no connection leaks in your code, did you try restarting IIS?

SQL Server "The wait operation timed out" when inserting records on remote server, but not locally

There is a weird problem with a deployed Windows application that uses a remote connection string to SQL Server 2012.
When inserting records, SQL Server times out after a relatively short time, saying "The wait operation timed out". I'm not able to debug the deployed application to find out why it is happening or where in the code it is happening.
However, I don't get this error when using the same database on the development machine, with a local connection.
Generally the code used is:
void MapData( SqlTransaction transaction, Dictionary<int, IDataObject> items )
{
    foreach ( var i in items )
    {
        transaction.Save( "CHECKPOINT" );
        try
        {
            ImportItem( transaction, i );
        }
        catch ( Exception e )
        {
            transaction.Rollback( "CHECKPOINT" );
        }
    }
    ReportStatus();
}
While this code has been working, I am uncertain about remote connections. We only have this one single case where it does NOT work.
What can it be?
Is there a more solid or performant approach than using Save() and Rollback() in a loop?
I don't want to use TransactionScope to spawn new "child" transactions.
Thanks!
Your transaction is taking too long (not sure if it's the commit or the rollback that times out). To understand why, you'd have to run a trace and get performance metrics.
But to get it working, you could increase your timeout. Set the SqlCommand CommandTimeout to a larger value or to 0 (no timeout). Also, the connection timeout is used for the transaction timeout - usually an issue only on expensive rollbacks. You specify this in the connection string, e.g. Connection Timeout=30.
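A minimal sketch of both settings together; the connection string, server, and procedure name below are placeholders, not taken from the question:
// Connection Timeout applies to opening the connection (and, per the above, can come
// into play for expensive rollbacks); CommandTimeout applies to the statement itself.
string connectionString = "Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=True;Connection Timeout=120";

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("dbo.MyInsertProcedure", connection)) // placeholder proc
{
    command.CommandType = CommandType.StoredProcedure;
    command.CommandTimeout = 0; // 0 = wait indefinitely; the default is 30 seconds

    connection.Open();
    command.ExecuteNonQuery();
}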

SQL CPU High - SqlConnection not being closed - Related?

I have an ASP.NET based application.
The CPU on the SQL Server box is constantly at ~90-100%.
There are a lot of inefficient queries, which I am currently working on. However, looking at the code from a previous developer, he never seemed to close (or dispose) the SqlConnection.
When I run the following query, I get around 450 connections that are "Awaiting Command":
SELECT COUNT(*)
  FROM master.dbo.sysprocesses
 WHERE DB_NAME(dbid) = 'CroCMS'
   AND dbid != 0
   AND cmd = 'AWAITING COMMAND'
Is this likely to be causing a problem?
I read this and it seems to relate:
http://www.pythian.com/news/1270/sql-server-understanding-and-controlling-connection-pooling-fragmentation/
We are also getting a lot of timeouts, specifically when replication is enabled.
I'm not sure if this is related. I have disabled (transactional) replication for now and it seems OK.
(This server is a subscriber to our in-office database server.)
Would disposing of the SqlConnection object help?
Yes, dispose them. Otherwise, ignore them for now. Possibly the pool is that large because the statements are slow. I would rather suggest:
Fixing the statements.
Checking that the application only uses one connection PER REQUEST (i.e. it does not open multiple at the same time).
If the problem does not get better after optimizing the SQL, you can revisit the pool.
You should always dispose the command object when you're done with it; that way connection pooling can be used more effectively.
Easiest is to use the using statement.
using (var sqlCommand = new SqlCommand(
           "storedprocname",
           new SqlConnection("connectionstring"))
           { CommandType = CommandType.StoredProcedure })
{
    // do what you should: setting params, executing, etc.
}
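In practice you will usually also want the connection itself in a using block, so it is closed deterministically and returned to the pool. A sketch with a placeholder connection string and procedure name:
using (var connection = new SqlConnection("connectionstring"))
using (var command = new SqlCommand("storedprocname", connection) { CommandType = CommandType.StoredProcedure })
{
    // command.Parameters.AddWithValue("@param", value);
    connection.Open();
    command.ExecuteNonQuery();
} // disposing the connection closes it and returns it to the pool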

How long does it take to create a new database connection to SQL

Can anyone provide me with a ballpark timing (in milliseconds) for how long it takes to establish a new DB connection to SQL Server from C#? I.e., what is the overhead when a connection pool must create a new connection?
It depends:
time to resolve the DNS name to an IP
time to open the TCP socket or the named pipe (on top of another TCP socket): 3 IP packets usually
time to handshake SSL/TLS if encryption is required: ~5 roundtrips, plus time to bootstrap the master key exchange if the SSL/TLS key info is not reused (i.e. one RSA private key access, which is very expensive)
time to authenticate the SQL password for SQL auth (2 roundtrips I believe)
time to authenticate NTLM/Kerberos for integrated auth (1 roundtrip to negotiate SPNEGO, 5-6 roundtrips if the Kerberos ticket is missing, 1 roundtrip if the ticket is present, 4-5 roundtrips if NTLM is chosen)
time to authorize the login (look up metadata, evaluate permissions against the login token)
possibly time to run any logon triggers
time to initialize the connection (1 roundtrip with the initial SET session options batch)
Some more esoteric times:
time to open auto-close databases if specified in the request (may include a recovery, usually doesn't)
time to attach the database if AttachDBFilename is used and the db is not already attached
time to start a 'user' instance for SQL 2005 RANU; that is about 40-60 seconds
Usually you can do some 10-15 new connections per second. If there's an issue (e.g. a DNS lookup problem, IPsec issues, SSL problems, Kerberos issues) it can easily go up to 10-15 seconds per connection.
By contrast, an existing pooled connection only has to execute sp_resetconnection (one roundtrip on an existing channel), and even that can be avoided if necessary.
You could always write up some code that opens a connection to your server and times it.
Something like:
Stopwatch timer = new Stopwatch();
timer.Start();
for (int i = 0; i < 100; ++i)
{
    using (SqlConnection conn = new SqlConnection("SomeConnectionString;Pooling=False;"))
    {
        conn.Open();
    }
}
timer.Stop();
Console.WriteLine(timer.ElapsedMilliseconds / 100);
That would give the average time to open and close 100 connections. Note, I did not run the above code.
EDIT: Disabled connection pooling per Richard Szalay's comment. Otherwise, the results would be skewed.
It depends on which database you are connecting to, whether it is local or across a network, and the network speed. If everything is local, then maybe 1 or 2 milliseconds (again, it depends on the DBMS). If, more realistically, it is over a LAN, it can still be pretty fast. Here is a simple example connecting to a server on a different subnet (one hop, I think):
for ( int i = 0; i < 5; i++ )
{
    Stopwatch timeit = new Stopwatch();
    timeit.Start();
    AdsConnection conn = new AdsConnection( @"Data Source = \\10.24.36.47:6262\testsys\;" );
    conn.Open();
    timeit.Stop();
    Console.WriteLine( "Milliseconds: " + timeit.ElapsedMilliseconds.ToString() );
    //conn.Close();
}
The following are the times it printed. The very first one has the cost of loading assemblies and various DLLs. The subsequent ones are only a measurement of the initialization of the new connections:
Milliseconds: 99
Milliseconds: 5
Milliseconds: 4
Milliseconds: 4
Milliseconds: 4
