DB2Connection Object Looping Open then Close Memory Exceptions - C#

I am using the IBM.Data.DB2.DB2DataAdapter object to make multiple connections to different databases on different servers.
My basic loop and connection structure looks like this:
foreach (MyDBObject db in allDBs)
{
    // Database call here for the current DB: get the SQL, then pass it to the DB call
    QueryCurrentDB(command);
}
Then...
DB2Connection _connection;

public DataTable QueryCurrentDB(DB2Command command)
{
    DataTable dataTable = new DataTable();
    _connection = new DB2Connection();
    DB2DataAdapter adapter = new DB2DataAdapter();
    _connection.ConnectionString = string.Format("Server={0};Database={1};UID={2};PWD={3};", _currentDB.DBServer, _currentDB.DBName, _currentDB.UserCode, _currentDB.Password);
    command.CommandTimeout = 20;
    command.Connection = _connection;
    adapter.SelectCommand = command;
    _connection.Open();
    adapter.Fill(dataTable);
    _connection.Close();
    _connection.Dispose();
    return dataTable;
}
With around 20 or so databases on different servers, I eventually end up getting this exception. I cannot control the memory allocation for each DB instance either.
ERROR [57019] [IBM] SQL1084C The database manager failed to allocate shared memory because an operating system kernel memory limit has been reached. SQLSTATE=57019
The only way I have been able to get around this is to put a thread sleep before each db call, such as:
System.Threading.Thread.Sleep(3000);
I hate this; any suggestions would be appreciated.

In the code posted, the Connection, Command and DataAdapter are all IDisposable, indicating they need to be disposed to free allocated resources, but only the DB2Connection object is actually disposed. Particularly in a loop such as yours, it is important to dispose of them to prevent leaks.
I don't have the DB2 providers, but they all work pretty much the same, especially in this regard. I'd start by refactoring the code, beginning with MyDBObject. Rather than just holding onto connection string parameters, have it create the connection(s) for you:
class MyDBObject
{
    private const string fmt = "Server={0};Database={1};UID={2};PWD={3};";
    ...
    public DB2Connection GetConnection()
    {
        return new DB2Connection(string.Format(fmt,
            DBServer, DBName, UserCode, Password));
    }
}
Then the loop method:
// this also could be a method in MyDBObject
public DataTable QueryCurrentDB(string SQL)
{
    DataTable dt = new DataTable();
    using (DB2Connection dbcon = currentDB.GetConnection())
    using (DB2Command cmd = new DB2Command(SQL, dbcon))
    {
        cmd.CommandTimeout = 20;
        dbcon.Open();
        dt.Load(cmd.ExecuteReader());
    }
    return dt;
}
Most importantly, note that the IDisposable objects are all enclosed in a using block. This will dispose (and close) the target and release any resources allocated.
You don't need a DataAdapter to fill a table; omitting it means one less IDisposable object created.
Rather than passing in the command, pass in the SQL. This allows you to also create, use and dispose of the DBCommand object.
If there is a chance of 2 tables in the same DB getting polled, I'd refactor further to make it possible to fill both tables on the same connection.
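As a minimal sketch of that further refactoring (the params signature is hypothetical), a single open connection can fill several tables in turn:
// Hypothetical sketch: fill several tables over one open connection.
public List<DataTable> QueryCurrentDB(params string[] sqlStatements)
{
    var tables = new List<DataTable>();
    using (DB2Connection dbcon = currentDB.GetConnection())
    {
        dbcon.Open();
        foreach (string sql in sqlStatements)
        {
            using (DB2Command cmd = new DB2Command(sql, dbcon))
            {
                cmd.CommandTimeout = 20;
                var dt = new DataTable();
                dt.Load(cmd.ExecuteReader());
                tables.Add(dt);
            }
        }
    }
    return tables;
}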
Before: 2 out of 3 objects were not being disposed (per iteration!)
After: 2 out of 2 objects are disposed.
I suspect the culprit was the DBCommand object (similar to this question), but it could be a combination of them.
Putting the thread to sleep (probably) works because it gives GC a chance to catch up on cleanup. You are probably not out of the woods yet. The link above was running into problems at 400 iterations; 20 or even 40 (20*2 objects) seems like a very small number to exhaust resources.
So, I suspect other parts of the code are also failing to dispose properly and that loop is just the straw which breaks the camel's back. Look for other loops and DB objects being used and be sure to dispose of them. Basically, anything which has a Dispose() method ought to be used in a using block.

Related

What is the benefit of caching an IReader?

Reviewing some legacy code, I found a commonly used table that gets updated very infrequently.
To avoid constantly going back to the database for the same data, it seems the developer was trying to cache it. The code looks like this:
private static IDataReader _cachedCheckList;

public override IDataReader GetDataReader()
{
    if (_cachedCheckList == null)
    {
        using (var oneTimeRead = base.GetDataReader())
        {
            _cachedCheckList = new CachedDataReader(oneTimeRead);
        }
    }
    return _cachedCheckList ?? base.GetDataReader();
}
Then elsewhere in the system the function that uses this follows the pattern of:
IDataReader reader = new CheckList().GetDataReader();
while (reader.Read())
{
    [snip]
}
I don't think loading the IReader into memory provides much of a performance increase.
I'm trying to understand the developer's reasoning for this code. What is the benefit of caching the IReader?
Update: The CachedDataReader() method is basically:
SqlConnection connection = new SqlConnection(ConnectionString);
connection.Open();
var command = new SqlCommand(commandText, connection);
command.CommandType = CommandType.StoredProcedure;
return command.ExecuteReader();
I'd not seen anyone cache a DataReader before and was wondering whether there was a good reason to do this before refactoring the code.
They may have cached the DataReader for the following reason:
A DataReader is read-only and forward-only. It fetches records from the database into the network buffer and hands them out as they are requested. It streams rows as the query executes rather than waiting for the entire result set to build up, which makes it very fast compared to a DataSet; rows are only delivered as the Read method is called.
However, a DataReader should not be cached. You should fetch the data into a DataSet or DataTable and cache that instead:
Cache["Data"] = dataTable;
Never cache DataReader objects. Because a DataReader object holds an open connection to the database, caching the object extends the lifetime of the connection, affecting other users of the database. Also, because the DataReader is a forward-only stream of data, after a client has read the information, the information cannot be accessed again. Caching it would be futile.
Caching DataReader objects disastrously affects the scalability of your applications. You may hold connections open and eventually cache all available connections, making the database unusable until the connections are closed. Never cache DataReader objects no matter what caching technology you are using.
The quotes above are taken from https://forums.asp.net/post/3224692.aspx
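As a sketch of that suggested refactoring applied to the code in the question (assuming the callers still want an IDataReader), the rows can be loaded once into a cached DataTable, and DataTable.CreateDataReader() can hand out a fresh forward-only reader on every call:
private static DataTable _cachedCheckList;

public override IDataReader GetDataReader()
{
    if (_cachedCheckList == null)
    {
        var table = new DataTable();
        using (var oneTimeRead = base.GetDataReader())
        {
            table.Load(oneTimeRead);  // copy all rows into memory once
        }
        _cachedCheckList = table;
    }
    // Each caller gets its own reader over the cached rows,
    // and no database connection is held open by the cache.
    return _cachedCheckList.CreateDataReader();
}
The existing while (reader.Read()) callers keep working unchanged, since DataTableReader implements IDataReader.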

C# Closing Database Connections

I need to get a bit of understanding on this: when you open a connection to a database, can you leave it open?
How does this connection get closed?
Is it good practice or bad practice?
Currently I have a database request that works with no problem:
oCON.Open();
oCMD.ExecuteNonQuery();
oCON.Close();
However, some of the examples that I have seen look something like this, with no database close:
oCON.Open();
oCMD.ExecuteNonQuery();
How would this connection get closed?
Is this bad practice?
I was looking for a duplicate, as this seems to be a common question. The top answer I found is this one; however, I don't like the answer that was given.
You should always close your connection as soon as you're done with it. The database allows only a finite number of connections, and each connection takes significant resources.
The "old school" way to ensure the close occurred was with a try/catch/finally block:
SqlConnection connection = null;
SqlCommand command = null;
try
{
    // Properly fill in all constructor variables.
    connection = new SqlConnection();
    command = new SqlCommand();
    connection.Open();
    command.ExecuteNonQuery();
    // Parse the results
}
catch (Exception ex)
{
    // Do whatever you need with exception
}
finally
{
    if (command != null)
    {
        command.Dispose();
    }
    if (connection != null)
    {
        connection.Dispose();
    }
}
However, the using statement is the preferred way as it will automatically Dispose of the object.
try
{
    // Properly fill in all constructor variables.
    using (var connection = new SqlConnection())
    using (var command = new SqlCommand())
    {
        connection.Open();
        command.ExecuteNonQuery();
        // Do whatever else you need to.
    }
}
catch (Exception ex)
{
    // Handle any exception.
}
The using statement is special in that even if an exception gets thrown, it still disposes of the objects created before execution stops. It also makes your code more concise and easier to read.
As mentioned by christophano in the comments, when your code gets compiled down to IL, it actually gets written as a try/finally block, replicating what is done in the above example.
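For reference, a using block over a reference type is roughly equivalent to this compiler-generated expansion:
var connection = new SqlConnection();
try
{
    // body of the using block
}
finally
{
    // Runs whether or not the body throws.
    if (connection != null)
    {
        ((IDisposable)connection).Dispose();
    }
}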
You want your SqlConnection to be in a using block:
using (var connection = new SqlConnection(connectionString))
{
    ...
}
That ensures that the SqlConnection will be disposed, which also closes it.
From your perspective the connection is closed. Behind the scenes the connection may or may not actually be closed. It takes time and resources to establish a SQL connection, so behind the scenes those connections aren't immediately closed. They're kept open and idle for a while so that they can be reused. It's called connection pooling. So when you open a connection, you might not really be opening a new connection. You might be retrieving one from the connection pool. And when you close it, it doesn't immediately close, it goes back to the pool.
That's all handled behind the scenes and it doesn't change what we explicitly do with our connections. We always "close" them as quickly as possible, and then the .NET Framework determines when they actually get closed. (It's possible to have some control over that behavior but it's rarely necessary.)
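For example (and this is rarely needed, as noted), pooling can be tuned through connection string keywords, and a pool can be flushed explicitly; the server and size values below are purely illustrative:
// Illustrative connection string: tune or disable pooling via keywords.
var cs = "Server=.;Database=MyDb;Integrated Security=true;" +
         "Min Pool Size=0;Max Pool Size=100;";  // add Pooling=false to bypass the pool

using (var connection = new SqlConnection(cs))
{
    connection.Open();  // may be served from the pool, not a new physical connection
}                       // "closed" here means returned to the pool

// Physically close the idle pooled connections for this connection string.
SqlConnection.ClearPool(new SqlConnection(cs));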
Take a look at the Repository Pattern with Unit of Work.
A connection context should be injected into the class that executes commands against the database.
A SQL execution class, such as a repository, should not create its own connection: that makes it untestable and violates the single responsibility principle (SRP).
It should instead accept an IDbConnection object, for example through its constructor. The repository should not care whether behind the IDbConnection there is an instance of SqlConnection, MySqlConnection or OracleConnection.
All of the ADO.NET connection objects implement IDbConnection.
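A minimal sketch of that idea (the repository name and query are made up for illustration); the caller, e.g. the unit of work, owns creating, opening and disposing the connection:
public class CustomerRepository
{
    private readonly IDbConnection _connection;

    // Any ADO.NET provider's connection can be injected here.
    public CustomerRepository(IDbConnection connection)
    {
        _connection = connection;
    }

    public int CountCustomers()
    {
        // The repository uses the connection but never creates or owns it.
        using (IDbCommand command = _connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM Customers";
            return Convert.ToInt32(command.ExecuteScalar());
        }
    }
}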

Using MySql in ASP.NET: Does closing a connection really release table locks?

I'm working on an ASP.NET application where, as part of some logic, I want to lock some tables and do work on them. The method runs in a separate thread running as a kind of background task, spawned via a Task. The problem comes in with the error handling...
The code looks more or less like this:
MySqlConnection connection = new MySqlConnection(ConfigurationManager.AppSettings["prDatabase"]);
try
{
    connection.Open();
    MySqlCommand lock_tables = new MySqlCommand(Queries.lockTables(), connection);
    lock_tables.ExecuteNonQuery();
    // do a bunch of work here
    MySqlCommand unlock_tables = new MySqlCommand(Queries.unlockTables(), connection);
    unlock_tables.ExecuteNonQuery();
}
catch (MySqlException mex)
{
    // Mostly error logging here
}
finally
{
    connection.Close();
}
Pretty simple stuff. Everything works fine and dandy assuming nothing goes wrong. That's a terrible assumption to make, though, so I deliberately set up a situation where things would foul up in the middle and move to the finally block.
The result was that my table locks remained until I closed the app, which I learned by trying to access the tables with a different client once the method completed. Needless to say this isn't my intention, especially since there's another app that's supposed to access those tables once I'm done with them.
I could quickly fix the problem by explicitly releasing the locks before closing the connection, but I'm still left curious about some things. Everything I've read before has sworn that closing a connection should implicitly release the table locks. Obviously in this case it isn't. Why is that? Does connection.Close() not actually completely close the connection? Is there a better way I should be closing my connections?
Try wrapping your connection and MySqlCommand instances in using statements. That will release the objects as soon as execution leaves the block.
using (MySqlConnection conn = new MySqlConnection(connStr))
{
    conn.Open();
    using (MySqlCommand command = new MySqlCommand("command to execute", conn))
    {
        //Code here..
    }
}
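Since the questioner found that explicitly releasing the locks fixed the problem, a defensive variant (reusing the Queries helpers from the question) would issue the unlock in a finally block so the locks are released even when the work throws:
using (MySqlConnection conn = new MySqlConnection(connStr))
{
    conn.Open();
    using (var lockTables = new MySqlCommand(Queries.lockTables(), conn))
    {
        lockTables.ExecuteNonQuery();
    }
    try
    {
        // do a bunch of work here
    }
    finally
    {
        // Runs even if the work above throws, so the locks are always released.
        using (var unlockTables = new MySqlCommand(Queries.unlockTables(), conn))
        {
            unlockTables.ExecuteNonQuery();
        }
    }
}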

Can an NHibernate session have two data readers open in separate threads?

I'd like to know the correct approach for running two simultaneous queries using NHibernate. Right now, I have a single ISession object that I use for all my queries:
session = sessionFactory.OpenSession();
In one thread, I'm loading some data which takes 10-15 seconds, but I don't need it right away so I don't want to block the entire program while it's loading:
IDbCommand cmd = session.Connection.CreateCommand();
cmd.CommandType = CommandType.TableDirect;
cmd.CommandText = "RecipesForModelingGraph";
IDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    // Do stuff
}
reader.Close();
This works fine; however, in another thread I might be running a query such as:
var newBlah = new Blah();
session.Save(newBlah);
When the above transaction commits, I occasionally get an exception:
Additional information: There is already an open DataReader associated with this Command which must be closed first.
Now, I thought maybe this was because I was running everything in the same transaction. So, I surrounded all my loading code with:
using (ITransaction transaction = session.BeginTransaction(IsolationLevel.Serializable))
{
    // Same DataReader code as above
}
However, the problem has not gone away. I'm thinking maybe I need each thread to have its own ISession object. Is this the correct approach, or am I doing something wrong? Note, I only want a single open connection to the database. Also, keep in mind the background thread is only loading data and nothing else, so I'm not worried about isolation levels or data changing as it's being read.
The session is tied to the thread, and the commands created are linked to the session's connection object. So yes, if a commit or close is executed while an open reader exists, you will get an exception.
You could Join() your threads and wait until all are complete before closing/committing.
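Alternatively, a minimal sketch of the one-session-per-thread approach the questioner suggests (Blah and sessionFactory are from the question; the query is illustrative). Note this does mean a second database connection while both sessions are open, which departs from the single-connection constraint in the question:
// Background thread: its own short-lived session for the slow load.
Task.Run(() =>
{
    using (ISession backgroundSession = sessionFactory.OpenSession())
    {
        var rows = backgroundSession
            .CreateSQLQuery("SELECT * FROM RecipesForModelingGraph")
            .List();
        // Do stuff with the loaded data.
    }
});

// Foreground thread: a separate session, so its commands never
// collide with the background thread's open reader.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    session.Save(new Blah());
    transaction.Commit();
}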

MonoTouch & SQLite - Cannot open database after previous successful connections

I am having difficulty reading data from my SQLite database from MonoTouch.
I can read and write without any difficulty for the first few screens and then suddenly I am unable to create any further connections with the error:
Mono.Data.Sqlite.SqliteException: Unable to open the database file
at Mono.Data.Sqlite.SQLite3.Open (System.String strFilename, SQLiteOpenFlagsEnum flags, Int32 maxPoolSize, Boolean usePool) [0x0007e] in /Developer/MonoTouch/Source/mono/mcs/class/Mono.Data.Sqlite/Mono.Data.Sqlite_2.0/SQLite3.cs:136
at Mono.Data.Sqlite.SqliteConnection.Open () [0x002aa] in /Developer/MonoTouch/Source/mono/mcs/class/Mono.Data.Sqlite/Mono.Data.Sqlite_2.0/SQLiteConnection.cs:888
I ensure that I dispose and close every connection each time I use it, but I still have this problem. For example:
var mySqlConn = new SqliteConnection(GlobalVars.connectionString);
mySqlConn.Open();
SqliteCommand mySqlCommand = new SqliteCommand(SQL, mySqlConn);
mySqlCommand.ExecuteNonQuery();
mySqlConn.Close();
mySqlCommand.Dispose();
mySqlConn.Dispose();
I'm guessing that I'm not closing the connections correctly. Any help would be greatly appreciated.
I'm pretty sure your guess is right. However, it's hard to tell exactly what went wrong (e.g. what's defined in your connectionString will affect how Sqlite is initialized and behaves).
From your example you seem to be disposing of the SqliteConnection correctly, but things could still go wrong. E.g. if some code throws an exception (and you catch it somewhere), the Dispose call might never happen. It would be safer to do something like:
using (var mySqlConn = new SqliteConnection(GlobalVars.connectionString)) {
    mySqlConn.Open();
    using (SqliteCommand mySqlCommand = new SqliteCommand(SQL, mySqlConn)) {
        mySqlCommand.ExecuteNonQuery();
        // work with the data
    }
    mySqlConn.Close();
}
That would ensure that the automatically generated finally clauses dispose of the instances you create.
Also, you might want to consider reusing your (first) connection instance, e.g. opening it once and reusing it everywhere in your application. On the other hand, you need to be aware of threading in that case (by default, though you can change it, each connection is only safe to use on the thread that created it).
Reusing could help your app's performance, but it does not really fix your issue (it might just hide it). So I suggest you try to debug this first:
Using MonoDevelop, you can set a breakpoint on line #136 of the /Developer/MonoTouch/Source/mono/mcs/class/Mono.Data.Sqlite/Mono.Data.Sqlite_2.0/SQLite3.cs file (which is included with your MonoTouch installation) to see the actual error code (before it gets translated to a string).
You can also set breakpoints on the dispose code to ensure it gets executed (and does not return errors). The number of connection creations and disposals should match. If not, use the Call Stack to see what is opening without closing.
I would suggest using a "using" block. That will make sure that everything is disposed of correctly and that you are not closing connections that are already closed.
using (SqliteConnection conn = new SqliteConnection(GlobalVars.connectionString))
{
    conn.Open ();
    SqliteCommand command = new SqliteCommand (conn);
    .............
}
OK - I've got it working now by moving the close and dispose into a finally block:
var mySqlConn = new SqliteConnection (GlobalVars.connectionString);
mySqlConn.Open ();
try {
    // CODE HERE
} finally {
    mySqlConn.Close();
    mySqlConn.Dispose();
}
