Lock on Static List or access by Key - c#

Please give your expert opinion on the static SortedList below, which holds key/value pairs.
Method 1 closes a connection by looking it up in the SortedList by key.
Method 2 closes a connection inside a lock statement on the SortedList, accessing it by index.
Please guide me on which approach is better when thousands of users are simultaneously creating thousands of connections on a web application. Note that accessing by index without locking can raise an index-out-of-range exception.
internal class ConnA
{
    static internal SortedList slCons = new SortedList();

    internal static bool CreateCon(string ConnID)
    {
        string constring = "sqlconnectionstring_containing_DataSource_UserInfo_InitialCatalog";
        SqlConnection objSqlCon = new SqlConnection(constring);
        objSqlCon.Open();
        bool connSuccess = objSqlCon.State == ConnectionState.Open;
        if (connSuccess && slCons.ContainsKey(ConnID) == false)
        {
            slCons.Add(ConnID, objSqlCon);
        }
        return connSuccess;
    }

    //Method1: access by key, no locking
    internal static void CloseConnection(string ConnID)
    {
        if (slCons.ContainsKey(ConnID))
        {
            SqlConnection objSqlCon = slCons[ConnID] as SqlConnection;
            objSqlCon.ResetStatistics();
            objSqlCon.Close();
            objSqlCon.Dispose();
            slCons.Remove(ConnID);
        }
    }

    //Method2 (alternative implementation; only one of the two would exist at a time):
    //lock the list and access by index
    internal static void CloseConnection(string ConnID)
    {
        lock (slCons)
        {
            int nIndex = slCons.IndexOfKey(ConnID);
            if (nIndex != -1)
            {
                SqlConnection objSqlCon = (SqlConnection)slCons.GetByIndex(nIndex);
                objSqlCon.ResetStatistics();
                objSqlCon.Close();
                objSqlCon.Dispose();
                slCons.RemoveAt(nIndex);
            }
        }
    }
}

internal class UserA
{
    public string ConnectionID { get { return HttpContext.Current.Session.SessionID; } }

    public void ConnectDB()
    {
        ConnA.CreateCon(ConnectionID);
    }

    public void DisConnectDB()
    {
        ConnA.CloseConnection(ConnectionID);
    }
}

Access to the SortedList isn't thread-safe.
In CreateCon, two threads could reach this line simultaneously:
if (connSuccess && slCons.ContainsKey(ConnID) == false)
Both threads could determine that the key isn't present, and then both try to add it, so that one of them fails.
In method 2:
When this is called - slCons.RemoveAt(nIndex); - the lock guarantees that another call to the same method won't remove another connection, which is good. But nothing guarantees that another thread won't call CreateCon and insert a new connection, changing the indexes so that nIndex now refers to a different item in the collection. You would end up closing, disposing, and removing the wrong connection, likely one that another thread was still using.
It looks like you're attempting an orchestration which ensures that a single connection will be used across multiple operations. But there's no need to introduce that complication. Whatever class or method needs a connection, there's no need for it to collaborate with this collection and these methods. You can just let each of them open a connection when it needs it and dispose the connection when it's done.
That sounds expensive, but this is why the framework implements connection pooling. From the perspective of your code, connections are being created, opened, closed, and disposed.
But behind the scenes, the "closed" connection isn't really closed, at least not right away. It's actually kept open. If, in a brief period, you "open" another connection with the same connection string, you're actually getting the same connection again, which is still open. That's how the number of connections opened and closed is reduced without us having to manually manage it.
That in turn prevents us from having to do what it looks like you're doing. This might be different if we were opening a transaction on a connection, and then we had to ensure that multiple operations were performed on the same connection. But even then it would likely be clearer and simpler to pass around the connection, not an ID.
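As a minimal sketch of that simpler pattern (the class, table, and connection string here are hypothetical, not from the question), each operation just opens, uses, and disposes its own connection and lets the pool handle reuse:

using System.Data.SqlClient;

internal static class OrderQueries // hypothetical caller
{
    // placeholder connection string
    private const string ConnString = "Data Source=...;Initial Catalog=...;Integrated Security=True";

    internal static int CountOrders()
    {
        // A new SqlConnection per operation; pooling makes this cheap.
        using (var con = new SqlConnection(ConnString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", con))
        {
            con.Open();
            return (int)cmd.ExecuteScalar();
        } // Dispose() here returns the underlying connection to the pool.
    }
}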

Related

Parallel.ForEach: Best way to save off a collection when its record count gets high?

So I'm running a Parallel.ForEach that basically generates a bunch of data which is ultimately going to be saved to a database. However, since the collection of data can get quite large, I need to be able to occasionally save/clear the collection so as to not run into an OutOfMemoryException.
I'm new to using Parallel.ForEach, concurrent collections, and locks, so I'm a little fuzzy on what exactly needs to be done to make sure everything works correctly (i.e. we don't get any records added to the collection between the Save and Clear operations).
Currently I'm saying, if the record count is above a certain threshold, save the data in the current collection, within a lock block.
ConcurrentStack<OutRecord> OutRecs = new ConcurrentStack<OutRecord>();
object StackLock = new object();

Parallel.ForEach(inputrecords, input =>
{
    lock (StackLock)
    {
        if (OutRecs.Count >= 50000)
        {
            Save(OutRecs);
            OutRecs.Clear();
        }
    }
    OutRecs.Push(CreateOutputRecord(input));
});
if (OutRecs.Count > 0) Save(OutRecs);
I'm not 100% certain whether or not this works the way I think it does. Does the lock stop other instances of the loop from writing to the output collection? If not, is there a better way to do this?
Your lock will work correctly, but it will not be very efficient because all your worker threads will be forced to pause for the entire duration of each save operation. Also, locks tend to be (relatively) expensive, so performing a lock in each iteration of each thread is a bit wasteful.
One of your comments mentioned giving each worker thread its own data storage: yes, you can do this. Here's an example that you could tailor to your needs:
Parallel.ForEach(
    // collection of objects to iterate over
    inputrecords,
    // delegate to initialize thread-local data
    () => new List<OutRecord>(),
    // body of loop
    (inputrecord, loopstate, localstorage) =>
    {
        localstorage.Add(CreateOutputRecord(inputrecord));
        if (localstorage.Count > 1000)
        {
            // Save() must be thread-safe, or you'll need to wrap it in a lock
            Save(localstorage);
            localstorage.Clear();
        }
        return localstorage;
    },
    // finally block gets executed after each thread exits
    localstorage =>
    {
        if (localstorage.Count > 0)
        {
            // Save() must be thread-safe, or you'll need to wrap it in a lock
            Save(localstorage);
            localstorage.Clear();
        }
    });
One approach is to define an abstraction that represents the destination for your data. It could be something like this:
public interface IRecordWriter<T> // perhaps come up with a better name.
{
    void WriteRecord(T record);
    void Flush();
}
Your class that processes the records in parallel doesn't need to worry about how those records are handled or what happens when there are too many of them. The implementation of IRecordWriter handles all those details, making your other class easier to test.
An implementation of IRecordWriter could look something like this:
public abstract class BufferedRecordWriter<T> : IRecordWriter<T>
{
    private readonly ConcurrentQueue<T> _buffer = new ConcurrentQueue<T>();
    private readonly int _maxCapacity;
    private bool _flushing;

    protected BufferedRecordWriter(int maxCapacity = 100)
    {
        _maxCapacity = maxCapacity;
    }

    public void WriteRecord(T record)
    {
        _buffer.Enqueue(record);
        if (_buffer.Count >= _maxCapacity && !_flushing)
            Flush();
    }

    public void Flush()
    {
        _flushing = true;
        try
        {
            var recordsToWrite = new List<T>();
            while (_buffer.TryDequeue(out T dequeued))
            {
                recordsToWrite.Add(dequeued);
            }
            if (recordsToWrite.Any())
                WriteRecords(recordsToWrite);
        }
        finally
        {
            _flushing = false;
        }
    }

    protected abstract void WriteRecords(IEnumerable<T> records);
}
When the buffer reaches its maximum size, all the records in it are sent to WriteRecords. Because _buffer is a ConcurrentQueue, other threads can keep adding records even while Flush is draining it.
That Flush method could be anything specific to how you write your records. Instead of this being an abstract class, the actual output to a database or file could be yet another dependency that gets injected into this one. You can make decisions like that, refactor, and change your mind, because the very first class isn't affected by those changes. All it knows about is the IRecordWriter interface, which doesn't change.
You might notice that I haven't made absolutely certain that Flush won't execute concurrently on different threads. I could put more locking around this, but it really doesn't matter. The _flushing flag will avoid most concurrent executions, and it's okay if two concurrent executions both read from the ConcurrentQueue.
This is just a rough outline, but it shows how all of the steps become simpler and easier to test if we separate them. One class converts inputs to outputs. Another class buffers the outputs and writes them. That second class can even be split into two - one as a buffer, and another as the "final" writer that sends them to a database or file or some other destination.
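For illustration, a minimal concrete writer might look like this (the console destination is just a stand-in for a real database or file, and is my own example, not from the question):

using System;
using System.Collections.Generic;

public class ConsoleRecordWriter<T> : BufferedRecordWriter<T>
{
    public ConsoleRecordWriter(int maxCapacity = 100) : base(maxCapacity) { }

    protected override void WriteRecords(IEnumerable<T> records)
    {
        // A real implementation would batch these into a database or file.
        foreach (var record in records)
            Console.WriteLine(record);
    }
}

The parallel loop from the question then only ever sees the interface:

IRecordWriter<OutRecord> writer = new ConsoleRecordWriter<OutRecord>(1000);
Parallel.ForEach(inputrecords, input => writer.WriteRecord(CreateOutputRecord(input)));
writer.Flush(); // write whatever is left in the buffer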

Managing and Closing Dynamically created SQL connections in .net

I have a C# Windows Forms application that connects to databases dynamically, where each user may connect to different databases.
The current implementation is as follows:
A connection repository contains a dynamically populated list of connections (per user).
When a user initiates a request that requires a database connection, the respective connection is looked up in the connection repository, opened, and then used for the request.
Code Sample from the connection repository
public class RepoItem
{
    public string databasename;
    public SqlConnection sqlcnn;
}

public class ConnectionRepository
{
    private List<RepoItem> connectionrepositroylist;

    public SqlConnection getConnection(String dbname)
    {
        SqlConnection cnn = (from n in connectionrepositroylist
                             where n.databasename == dbname
                             select n.sqlcnn).Single();
        cnn.Open();
        return cnn;
    }
}
Sorry for any code errors; I just improvised a small version of the implementation for demonstration purposes.
I'm not closing connections after a command execution because a connection may be used by another command simultaneously.
The questions are:
Should I be worried about closing the connections?
Does a connection close automatically if it is idle for a specific period?
I have a method in mind to implement a timer in the Connection Repository, check for idle connections via the ConnectionState enumeration (e.g. whether the state is Executing), and close them manually.
Any suggestions are welcome.
When I want a specific connection I call the getConnection function in the ConnectionRepository class and pass the database name as a parameter.
PS: I didn't post the complete implemented code because it is quite big and includes the preferences that affect the populating of the connection list.
I would suggest not returning the SqlConnection to the calling method at all.
Instead, create a method that accepts an Action<SqlConnection>, creates the connection inside a using block, and executes the action inside that block.
This way you know the connection will always be correctly closed and disposed, while giving the calling code the freedom to do whatever it needs:
public class RepoItem
{
    public string databasename;
    public SqlConnection sqlcnn;
}

public class DatabaseConnector
{
    private List<RepoItem> connectionrepositroylist;

    private SqlConnection GetConnection(String dbname)
    {
        return (from n in connectionrepositroylist
                where n.databasename == dbname
                select n.sqlcnn).SingleOrDefault();
    }

    public void Execute(String dbname, Action<SqlConnection> action)
    {
        using (var cnn = GetConnection(dbname))
        {
            if (cnn != null) // in case dbname is not in the list...
            {
                cnn.Open();
                action(cnn);
            }
        }
    }
}
Then, to execute an SQL statement you can do something like this:
public void ExecuteReaderExample(string dbName, string sql)
{
    Execute(dbName,
        connection =>
        {
            using (var cmd = new SqlCommand(sql, connection))
            {
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // do stuff with data from the database
                    }
                }
            }
        });
}
Of course, you can also wrap the SqlCommand itself in a method in much the same way.
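For instance, a sketch of such an overload (the method shape here is my own addition, not from the original class) could live alongside the Execute method above:

public void Execute(String dbname, string sql, Action<SqlCommand> action)
{
    Execute(dbname, connection =>
    {
        using (var cmd = new SqlCommand(sql, connection))
        {
            action(cmd); // the caller adds parameters and executes
        }
    });
}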
I've been working with this approach for quite some time now, and as far as I can tell it's working well. In fact, it's working so well that I've published a project on GitHub based on this approach.
It saves you a lot of the plumbing when dealing with ADO.NET, by wrapping the connection, command, reader and adapter in much the same way.
Feel free to download it and adapt to your needs.
P.S.
To answer your questions directly:
Should I be worried about closing the connections?
Yes, you should.
Does a connection close automatically if it is idle for a specific period?
No, it doesn't.
However, implementing a method like I suggested will handle closing and disposing the connection object for you, so you don't need to worry about it.
Update
As Yahfoufi wrote in his comment, this design has a flaw: since multiple commands use the same instance of SqlConnection, you risk closing the connection while other commands are running.
However, fixing this design flaw is very easy - instead of holding a SqlConnection in RepoItem, you can simply hold the connection string:
public class RepoItem
{
    public string DatabaseName { get; set; }
    public string ConnectionString { get; set; }
}
Then you change the GetConnection method like this:
private SqlConnection GetConnection(String dbname)
{
    return new SqlConnection((from n in connectionrepositroylist
                              where n.DatabaseName == dbname
                              select n.ConnectionString).SingleOrDefault());
}
Now each Execute call works on its own individual instance of SqlConnection, so you don't need to worry about closing in the middle of some other command executing.
However, while we are on the subject of refactoring, I would suggest removing the RepoItem class altogether, and instead of using a List<RepoItem> to hold the connection strings, simply use a Dictionary<string, string>, where the database name is the key and the connection string is the value. This way you can only have one connection string per database name, and your GetConnection method is simplified to this:
private Dictionary<string, string> connectionrepositroylist;

private string GetConnectionString(String dbname)
{
    return connectionrepositroylist.ContainsKey(dbname) ? connectionrepositroylist[dbname] : "";
}
So, the complete DatabaseConnector class will look like this:
public class DatabaseConnector
{
    private Dictionary<string, string> connectionrepositroylist;

    private string GetConnectionString(String dbname)
    {
        return connectionrepositroylist.ContainsKey(dbname) ? connectionrepositroylist[dbname] : "";
    }

    public void Execute(String dbname, Action<SqlConnection> action)
    {
        var connectionString = GetConnectionString(dbname);
        if (!string.IsNullOrEmpty(connectionString))
        {
            using (var cnn = new SqlConnection(connectionString))
            {
                cnn.Open();
                action(cnn);
            }
        }
    }

    // Of course, you will need a way to populate your dictionary -
    // I suggest having a couple of methods like this to add, update and remove items.
    public void AddOrUpdateDataBaseName(string dbname, string connectionString)
    {
        if (connectionrepositroylist.ContainsKey(dbname))
        {
            connectionrepositroylist[dbname] = connectionString;
        }
        else
        {
            connectionrepositroylist.Add(dbname, connectionString);
        }
    }
}
The good news is that ADO.NET manages your connection pools dynamically, so there's minimal overhead in dynamically opening and closing connections in code. There's a good document here if you want to look through the detail.
To answer the specific questions you've raised:
Should I be worried about closing the connections?
Yes, but not for the reasons you may think. Microsoft encourages you to close your connections so as to return them to the pool for (re)use elsewhere in your code. Closing the connection doesn't actually close it - it merely returns the underlying connection to the pool. Failure to close your connections properly can lead to delays in them being returned to the pool, adversely affecting your application's performance as more connections need to be added to the pool to cope with demand.
Does a connection close automatically if it is idle for a specific period?
A connection is only returned to the pool when its Dispose or Finalise method gets called. If you create a connection and drop it into a static container then it will not be returned to the pool at all. As such, your ConnectionRepository may actually be harming performance.
I have a method in mind to implement a timer in the created Connection Repository and check for idle connections
This is unnecessary - close your connections to allow them to return to the pool. This way they will be available for other threads to use.
Personally, I'd suggest that you modify your RepoItem class to store connection strings, rather than connection objects, and let ADO.Net's pooling do all the heavy lifting.
public static class ConnectionRepository
{
    private static readonly Dictionary<string, string> Connections = new Dictionary<string, string>(StringComparer.CurrentCultureIgnoreCase);

    public static bool Contains(string key)
    {
        return Connections.ContainsKey(key);
    }

    public static void Add(string key, string connectionString)
    {
        Connections.Add(key, connectionString);
    }

    public static SqlConnection Get(string key)
    {
        var con = new SqlConnection(Connections[key]);
        con.Open();
        return con;
    }
}
With this in place, you can query the database as follows:
public static void foo()
{
    using (var con = ConnectionRepository.Get("MyConnection"))
    using (var cmd = new SqlCommand("SELECT * FROM MyTable", con))
    {
        var dr = cmd.ExecuteReader();
        //...
    }
}
Once the query has executed and the connection is no longer required, the using() block calls its Dispose() method and releases the underlying connection back to the pool for re-use.
As @tinudu says, the SqlConnection class reuses existing connections automatically - you don't have to implement that yourself. See SQL Server Connection Pooling.
If you create the SqlConnection object in a using statement, C# will close the connection automatically as required.
Wrapping the whole thing (create connection, open, run query, close connection) in a function is the best idea. You can put the function in a repository base-class, so it is available to all your repositories.
You would need several functions for the different types of SQL query (select, update, stored proc) but you only need to write one of each - they will get reused.
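As a sketch of that idea (the base-class name and helper shapes are illustrative, and it assumes the ConnectionRepository shown above), the repository base class might expose helpers like these:

public abstract class RepositoryBase
{
    // ConnectionRepository.Get opens the connection, so we only need to dispose it.
    protected int ExecuteNonQuery(string connectionKey, string sql)
    {
        using (var con = ConnectionRepository.Get(connectionKey))
        using (var cmd = new SqlCommand(sql, con))
        {
            return cmd.ExecuteNonQuery();
        }
    }

    protected object ExecuteScalar(string connectionKey, string sql)
    {
        using (var con = ConnectionRepository.Get(connectionKey))
        using (var cmd = new SqlCommand(sql, con))
        {
            return cmd.ExecuteScalar();
        }
    }
}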
If you are worried about many scenarios, parallel execution for example, then consider reserving connections for the app and closing them all at once when the app is closing.
public class RepoItem
{
    public string databasename;
    public SqlConnection sqlcnn;
}

public class ConnectionRepository
{
    private List<RepoItem> connectionrepositroylist;

    public SqlConnection getConnection(String dbname)
    {
        SqlConnection cnn = (from n in connectionrepositroylist
                             where n.databasename == dbname
                             select n.sqlcnn).SingleOrDefault();
        if (cnn != null && cnn.State == ConnectionState.Closed) // implement other checks as well
        {
            cnn.Open();
        }
        return cnn;
    }
}
Implement CloseConnections and call it while the application is closing, i.e. in the Application.ApplicationExit event:
public void CloseConnections()
{
    foreach (var repoItem in connectionrepositroylist)
    {
        try
        {
            if (repoItem.sqlcnn.State == System.Data.ConnectionState.Open) // check other conditions
            {
                repoItem.sqlcnn.Close();
            }
        }
        catch (Exception)
        {
            //logging or special handling
        }
    }
}
Points to note: if some query is still executing when the user tries to shut down or close the app, you can consider the following implementations:
Don't allow the application to shut down; a callback delegate will help in this case to ensure that the query has returned.
Force-stop and close the connection.
It is better practice to close the SqlConnection manually, since that releases the connection to be used by other processes. Also note that you should open the connection as late as you can and close it as early as possible.

Using SQL Server application locks to solve locking requirements

I have a large application based on Dynamics CRM 2011 that in various places has code that must query for a record based upon some criteria, create it if it doesn't exist, or update it if it does.
An example of the kind of thing I am talking about would be similar to this:
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    record.Id = Guid.NewGuid();
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
In terms of the CRM 2011 implementation (although not strictly relevant to this question), the code could be triggered from synchronous or asynchronous plugins. The issue is that the code is not thread-safe: between checking whether the record exists and creating it if it doesn't, another thread could come in and do the same thing first, resulting in duplicate records.
Normal locking methods are not reliable due to the architecture of the system: various services using multiple threads could all be running the same code, and these services are also load-balanced across multiple machines.
In trying to find a solution to this problem that doesn't add massive amounts of extra complexity and doesn't introduce a single point of failure or a single point where a bottleneck could occur, I came across the idea of using SQL Server application locks.
I came up with the following class:
public class SQLLock : IDisposable
{
    //Lock constants
    private const string _lockMode = "Exclusive";
    private const string _lockOwner = "Transaction";
    private const string _lockDbPrincipal = "public";

    //Variable for storing the connection passed to the constructor
    private SqlConnection _connection;
    //Variable for storing the name of the Application Lock created in SQL
    private string _lockName;
    //Variable for storing the timeout value of the lock
    private int _lockTimeout;
    //Variable for storing the SQL Transaction containing the lock
    private SqlTransaction _transaction;
    //Variable for storing if the lock was created ok
    private bool _lockCreated = false;

    public SQLLock(string lockName, int lockTimeout = 180000)
    {
        _connection = Connection.GetMasterDbConnection();
        _lockName = lockName;
        _lockTimeout = lockTimeout;
        //Create the Application Lock
        CreateLock();
    }

    public void Dispose()
    {
        //Release the Application Lock if it was created
        if (_lockCreated)
        {
            ReleaseLock();
        }
        _connection.Close();
        _connection.Dispose();
    }

    private void CreateLock()
    {
        _transaction = _connection.BeginTransaction();
        using (SqlCommand createCmd = _connection.CreateCommand())
        {
            createCmd.Transaction = _transaction;
            createCmd.CommandType = System.Data.CommandType.Text;
            StringBuilder sbCreateCommand = new StringBuilder();
            sbCreateCommand.AppendLine("DECLARE @res INT");
            sbCreateCommand.AppendLine("EXEC @res = sp_getapplock");
            sbCreateCommand.Append("@Resource = '").Append(_lockName).AppendLine("',");
            sbCreateCommand.Append("@LockMode = '").Append(_lockMode).AppendLine("',");
            sbCreateCommand.Append("@LockOwner = '").Append(_lockOwner).AppendLine("',");
            sbCreateCommand.Append("@LockTimeout = ").Append(_lockTimeout).AppendLine(",");
            sbCreateCommand.Append("@DbPrincipal = '").Append(_lockDbPrincipal).AppendLine("'");
            sbCreateCommand.AppendLine("IF @res NOT IN (0, 1)");
            sbCreateCommand.AppendLine("BEGIN");
            sbCreateCommand.AppendLine("RAISERROR ( 'Unable to acquire Lock', 16, 1 )");
            sbCreateCommand.AppendLine("END");
            createCmd.CommandText = sbCreateCommand.ToString();
            try
            {
                createCmd.ExecuteNonQuery();
                _lockCreated = true;
            }
            catch (Exception ex)
            {
                _transaction.Rollback();
                throw new Exception(string.Format("Unable to get SQL Application Lock on '{0}'", _lockName), ex);
            }
        }
    }

    private void ReleaseLock()
    {
        using (SqlCommand releaseCmd = _connection.CreateCommand())
        {
            releaseCmd.Transaction = _transaction;
            releaseCmd.CommandType = System.Data.CommandType.StoredProcedure;
            releaseCmd.CommandText = "sp_releaseapplock";
            releaseCmd.Parameters.AddWithValue("@Resource", _lockName);
            releaseCmd.Parameters.AddWithValue("@LockOwner", _lockOwner);
            releaseCmd.Parameters.AddWithValue("@DbPrincipal", _lockDbPrincipal);
            try
            {
                releaseCmd.ExecuteNonQuery();
            }
            catch {}
        }
        _transaction.Commit();
    }
}
I would use this in my code to create a SQL Server application lock, using the unique key I am querying for as the lock name, like this:
using (var sqlLock = new SQLLock(id))
{
    //Code to check for and create or update record here
}
Now this approach seems to work; however, I am by no means any kind of SQL Server expert and am wary about putting this anywhere near production code.
My question really has 3 parts
1. Is this a really bad idea because of something I haven't considered?
Are SQL Server application locks completely unsuitable for this purpose?
Is there a maximum number of application locks (with different names) you can have at a time?
Are there performance considerations if a potentially large number of locks are created?
What else could be an issue with the general approach?
2. Is the solution actually implemented above any good?
If SQL Server application locks are usable like this, have I actually used them properly?
Is there a better way of using SQL Server to achieve the same result?
In the code above I am getting a connection to the Master database and creating the locks in there. Does that potentially cause other issues? Should I create the locks in a different database?
3. Is there a completely alternative approach that could be used that doesn't use SQL Server application locks?
I can't use stored procedures to create and update the record (unsupported in CRM 2011).
I don't want to add a single point of failure.
You can do this much more easily.
//make sure your plugin runs within a transaction; this is the case for stage 20 and 40
//you can check this with IExecutionContext.IsInTransaction
//does not work with offline plugins, but works within CRM Online (cloud) and is fully supported
//also works on transaction rollback
var lockUpdateEntity = new dummy_lock_entity(); //simple technical entity with as many rows as different lock barriers you need
lockUpdateEntity.Id = Guid.Parse("well known guid"); //well known guid for this barrier
lockUpdateEntity.dummy_field = Guid.NewGuid(); //just update/change a field to create a lock, regardless of its content

//--------------- this is untested by me, I use the next one
context.UpdateObject(lockUpdateEntity);
context.SaveChanges();
//---------------
//OR
//--------------- I use this one, but you need a reference to your OrganizationService
OrganizationService.Update(lockUpdateEntity);
//---------------

//threads wait here if they have no lock for dummy_lock_entity with "well known guid"
stk_balance record = context.stk_balanceSet.FirstOrDefault(x => x.stk_key == id);
if (record == null)
{
    record = new stk_balance();
    //record.Id = Guid.NewGuid(); //not needed
    record.stk_value = 100;
    context.AddObject(record);
}
else
{
    record.stk_value += 100;
    context.UpdateObject(record);
}
context.SaveChanges();
//let the pipeline flow and the transaction complete ...
For more background info refer to http://www.crmsoftwareblog.com/2012/01/implementing-robust-microsoft-dynamics-crm-2011-auto-numbering-using-transactions/

How does a Linq-to-SQL DataClasses connection get closed?

I have a Linq-to-SQL DataClasses object that I utilize to make database calls. I've wrapped it like so:
public class DataWrapper {
    private DataClassesDataContext _connection = null;
    private static DataWrapper _instance = null;

    private const string PROD_CONN_STR = "Data Source=proddb;Initial Catalog=AppName;User ID=sa;Password=pass; MultipleActiveResultSets=true;Asynchronous Processing=True";
    // DEV_CONN_STR is defined the same way (omitted here)

    public static DataClassesDataContext Connection {
        get {
            if (Instance._connection == null)
                Instance._connection = new DataClassesDataContext(DEV_CONN_STR);
            return Instance._connection;
        }
    }

    private static DataWrapper Instance {
        get {
            if (_instance == null) {
                _instance = new DataWrapper();
            }
            return _instance;
        }
    }
}
I have a couple of threads using this wrapper to make stored procedure calls, like this:
DataWrapper.Connection.Remove_Message(completeMessage.ID);
On very rare occasions, my DataClasses object will throw the exception:
ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.
I'm not managing the connection's state in any way -- I figured Linq-to-SQL should handle this. I could check the connection state each time I make a call and open the connection if it has been closed, but that seems like a hack.
I've tried putting MultipleActiveResultSets=true and Asynchronous Processing=True in the connection string to try to handle the possibility of SQL forcibly closing connections, but that hasn't seemed to help.
Any ideas?
You should not cache and re-use a DB connection object ... especially from multiple threads.
You should open a connection, execute your operation(s) and close your connection each time you need to access the DB.
The underlying database access infrastructure (ADO.NET) will manage connection pooling in such a way that most re-connection costs are reduced to (effectively) zero.
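As a rough sketch of that advice applied to the wrapper above (the method body and the int parameter type are assumptions on my part; Remove_Message and the connection string come from the question):

public static class DataWrapper
{
    private const string PROD_CONN_STR = "..."; // as in the question

    public static void RemoveMessage(int id)
    {
        // A fresh context per call; connection pooling keeps this cheap.
        using (var db = new DataClassesDataContext(PROD_CONN_STR))
        {
            db.Remove_Message(id);
        }
    }
}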

Should I keep an instance of DbContext in a separate thread that performs periodic job

I have a class Worker which sends emails periodically. I start it in Global.asax.cs in App_Start():
public static class Worker
{
    public static void Start()
    {
        ThreadPool.QueueUserWorkItem(o => Work());
    }

    public static void Work()
    {
        var r = new DbContext();
        var m = new MailSender(new SmtpServerConfig());
        while (true)
        {
            Thread.Sleep(600000);
            try
            {
                var d = DateTime.Now.AddMinutes(-10);
                var ns = r.Set<Notification>().Where(o => o.SendEmail && !o.IsRead && o.Date < d);
                foreach (var n in ns)
                {
                    m.SendEmailAsync("noreply@example.com", n.Email, NotifyMailTitle(n) + " - forums", NotifyMailBody(n));
                    n.SendEmail = false;
                }
                r.SaveChanges();
            }
            catch (Exception ex)
            {
                ex.Raize();
            }
        }
    }
}
So I keep this DbContext alive for the entire lifetime of the application. Is this a good practice?
DbContext is a very lightweight object.
It doesn't matter whether your DbContext stays alive or you instantiate it just before making the call, because the actual DB connection only opens when you SubmitChanges or enumerate the query (in which case it is closed at the end of enumeration).
In your specific case, it doesn't matter at all.
Read Linq DataContext and Dispose for details on this.
I would wrap it in a using statement inside of Work and let the database connection pool do its thing:
using (DbContext r = new DbContext())
{
    //working
}
NOTE: I am not 100% sure how DbContext handles the DB connections; I am assuming it opens one.
It is not good practice to keep a database connection 'alive' for the lifetime of an application. You should use a connection when needed and close it via the API (a using statement will take care of that for you). The database connection pool will actually open and close connections based on connection demands.
I agree with @rick schott that you should instantiate the DbContext when you need to use it rather than keep it around for the lifetime of the application. For more information, see Working with Objects (Entity Framework 4.1), especially the section on Lifetime:
When working with a long-running context, consider the following:
As you load more objects and their references into memory, the memory consumption of the context may increase rapidly. This may cause performance issues.
If an exception causes the context to be in an unrecoverable state, the whole application may terminate.
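Putting that together, a minimal sketch of the worker loop with a short-lived context per iteration (same types and helper methods as in the original code, which I'm assuming exist as shown):

public static void Work()
{
    var m = new MailSender(new SmtpServerConfig());
    while (true)
    {
        Thread.Sleep(600000);
        try
        {
            // A fresh, short-lived context for each pass.
            using (var r = new DbContext())
            {
                var d = DateTime.Now.AddMinutes(-10);
                var ns = r.Set<Notification>().Where(o => o.SendEmail && !o.IsRead && o.Date < d);
                foreach (var n in ns)
                {
                    m.SendEmailAsync("noreply@example.com", n.Email, NotifyMailTitle(n) + " - forums", NotifyMailBody(n));
                    n.SendEmail = false;
                }
                r.SaveChanges();
            } // context (and any underlying connection) disposed here
        }
        catch (Exception ex)
        {
            ex.Raize(); // custom error handler from the original code
        }
    }
}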
