I have developed a Windows service that uses database connections.
I have created the following field:
private MyDBEntities _db;
and in OnStart I have:
_db = new MyDBEntities();
Then the service does its work.
In OnStop method I have:
_db.Dispose();
_db = null;
Is there any disadvantage to this approach? For performance reasons, I need the connection to the database (which is SQL Server) to stay open the whole time the service is running.
Thanks
Jaime
If your service is the only app that accesses this database, you shouldn't see any performance decrease. However, in my opinion a long-lived connection to the database is not the best approach. Imagine a situation where you don't keep your database on your own server but use a cloud provider (Google, AWS, Azure). With cloud solutions, the address of your server may not be fixed and can vary over time. It may happen that the IP address changes during the execution of a query (most likely you'll get a SqlTransientException or something similar, I don't remember exactly).
If your service is the only app that accesses the database and you run only one instance of it, then this approach might be beneficial in terms of performance, since you don't pay the cost of opening and closing a connection for every operation. However, remember that with this approach many other issues may appear: you may have to reconnect after the connection goes stale, connect to other replica instances, or tear down the existing connection for reasons I can't think of at the moment. Moreover, remember the multithreading issues that will most likely arise with this approach if you don't handle them correctly.
IMHO, you should open a connection to the database whenever it is needed and close it just after use. You'll avoid most of the issues I've mentioned above.
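A minimal sketch of that pattern, assuming plain ADO.NET with System.Data.SqlClient (the connection string and query are placeholders). Thanks to connection pooling, "closing" normally just returns the physical connection to the pool, so opening per operation is cheap:

using System.Data.SqlClient;

public int CountPeople(string connectionString)
{
    // open just before use...
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM People", conn))
    {
        conn.Open();
        return (int)cmd.ExecuteScalar();
    } // ...and close right after; the physical connection goes back to the pool
}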
Having a singleton context will cause threads to serialize on SaveChanges(), slowing performance.
Also, each event (which I presume runs asynchronously) could end up saving some other event's pending changes, causing unexpected behavior.
As someone already pointed out, you can rely on connection pooling to keep connections cheap, and dispose the context on each request/event fired.
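A sketch of that dispose-per-event approach, reusing the MyDBEntities context from the question (the handler name and the work inside it are illustrative):

private void OnSomethingHappened(object sender, EventArgs e)
{
    // a short-lived context per event: no shared state between events
    using (var db = new MyDBEntities())
    {
        // ... query or modify entities for this event only ...
        db.SaveChanges();
    } // dispose returns the pooled connection
}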
Related
I need to create a tool or some observation mechanism to report how many Redis connections I have open. We're having problems with this, and we only get actual data from the production environment (Azure), and by the time it shows up there it's kind of too late...
So, on a local machine (where every developer has Redis installed for testing), how can I know how many open connections I have at a given moment? The ideal number would be zero, because you open it, get/set whatever, close... right?
Run CLIENT LIST or INFO against your Redis instance to find out who's connected at any given moment.
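If you want the same information from code, here is a sketch assuming the StackExchange.Redis client (note that CLIENT LIST is an admin command there, hence allowAdmin=true):

using System;
using StackExchange.Redis;

var muxer = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
var server = muxer.GetServer("localhost", 6379);

var clients = server.ClientList(); // one entry per open connection
Console.WriteLine($"{clients.Length} connections:");
foreach (var client in clients)
    Console.WriteLine($"  {client.Address} name={client.Name}");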
The ideal number would be zero, cause you open it, get/set whatever, close... right?
Actually, not necessarily - some clients offer the possibility of keeping connections open for pooling purposes.
Use a class factory to create your redis connections, open them and lease them out to the consumer classes. The consumer classes return them to the factory for reuse or closure.
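An illustrative sketch of that factory (every name here is hypothetical; note that some clients, e.g. StackExchange.Redis, already multiplex connections internally, so a hand-rolled pool is only worth it for clients that don't):

using System.Collections.Concurrent;

public class RedisConnectionFactory
{
    private readonly ConcurrentBag<RedisConnection> _idle = new ConcurrentBag<RedisConnection>();
    private readonly string _host;

    public RedisConnectionFactory(string host) { _host = host; }

    // consumers lease a connection instead of opening their own
    public RedisConnection Lease()
    {
        return _idle.TryTake(out var conn) ? conn : RedisConnection.Open(_host);
    }

    // ...and hand it back for reuse (or closure, if the pool is considered full)
    public void Return(RedisConnection conn)
    {
        _idle.Add(conn);
    }
}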
I want to implement the following interface on a 2-Tier application with an MS SQL-Server 2008R2 (i.e. no app server in between)
interface ILicense {
    void Acquire(string license);
    void Release(string license);
}
However, I want to release the license even if the application is killed or bombs out without calling the Release method. I also want to avoid using a timer which refreshes the license every minute or so.
So I thought: use a dedicated SqlConnection together with the sp_getapplock and sp_releaseapplock stored procedures, because that's what they seem to be made for. Then I found out that these SPs take locks owned by the current transaction (by default), so I would need to keep a transaction open the whole time, i.e. while the application is running. Anyway, it works that way. The application starts, opens the connection, starts the transaction, and locks the license.
When the application terminates, the connection is closed, everything is rolled back and the license is released. Super.
Whenever the running app needs to switch licenses (e.g. for another module), it calls Release on the old license and then Acquire on the new one. Cool.
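For reference, a sketch of how such an acquire step can look over ADO.NET, assuming System.Data.SqlClient (sp_getapplock returns >= 0 when the lock is granted; with the transaction as lock owner, killing the connection rolls back and frees the lock automatically):

using System;
using System.Data;
using System.Data.SqlClient;

public SqlTransaction AcquireLicense(SqlConnection conn, string license)
{
    var tx = conn.BeginTransaction(); // stays open while the license is held
    using (var cmd = new SqlCommand("sp_getapplock", conn, tx))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@Resource", license);
        cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
        cmd.Parameters.AddWithValue("@LockTimeout", 0); // fail fast if already taken
        var ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
        ret.Direction = ParameterDirection.ReturnValue;
        cmd.ExecuteNonQuery();

        if ((int)ret.Value < 0) // negative = lock not granted
        {
            tx.Rollback();
            throw new InvalidOperationException("License already in use: " + license);
        }
    }
    return tx; // Release = sp_releaseapplock on this transaction, or just Rollback
}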
Now to my question(s):
Is it acceptable to keep an open (uncommitted) transaction on a separate connection for a long time?
Are there any better ways to implement such a 'lock' mechanism? The problem is that the license must be released even if the application terminates unexpectedly. I thought of some sort of 'logout' trigger, but that does not exist in SQL Server 2008 R2.
I am by no means the SQL or DB guru that some of the members of this site are, but your setup brings up a few concerns and things to consider.
This could really limit the number of concurrent users your application can have, especially in a 2-tier architecture. In a 3-tier approach the app server would manage and pool these connections/transactions, but then you would lose the ability to use those stored procs to implement your licensing mechanism, I believe.
With the transaction being open for some indeterminate period of time, I would worry about tempdb growing too big or exceeding the space allocated to it. I don't know what is going on in the app and whether anything else happens inside that transaction; my guess is no, but I thought I would mention it.
I hope I am not getting my SQL versions mixed up here (transaction ID wraparound is, as far as I know, a PostgreSQL failure mode rather than a SQL Server one), but transaction wraparound could cause the DB to shut down.
This limits your app significantly, as the data touched in the transaction holds locks that won't be released until you commit or roll back.
There must be a more elegant way to implement a licensing model that doesn't rely on leaving a transaction open for the life of the app or app module. A 2-tier app implies that the client always has some kind of connectivity, so maybe generate some kind of unique id for the client and either add a call-home method or, if you really are set on instantaneous verification, have the client check that it is properly licensed every time it performs an action that queries the DB.
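A hypothetical sketch of that per-action check (the Licenses table, its columns, and the revocation policy are all invented for illustration): each client touches its own row on every DB action, and rows that have not been touched within some timeout can be treated as abandoned licenses.

public bool IsLicensed(SqlConnection conn, Guid clientId)
{
    const string sql =
        "UPDATE Licenses SET LastSeen = SYSDATETIME() WHERE ClientId = @id; " +
        "SELECT @@ROWCOUNT;";
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@id", clientId);
        return (int)cmd.ExecuteScalar() > 0; // 0 rows = no valid license row
    }
}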
Lastly, in all of the SQL teachings I have received from DB guys who really know their stuff, this kind of setup (a long-running open transaction) was never recommended unless there was a very specific need that could not be solved otherwise.
We are building a WinForms desktop application which talks to SQL Server through NHibernate. After extensive research we settled on the session-per-form strategy, using Ninject to inject a new ISession into each form (or the backing controller, to be precise). So far it is working decently.
Unfortunately the main form holds a lot of data (mostly read-only) which gets stale after some time. To prevent this we implemented a background service (really just a separate class) which polls the DB for changes and raises an event that lets the main form selectively update the changed rows.
This background service also gets a separate session to minimize interference with the other forms. Our understanding was that it is possible to have one open transaction per session in parallel, as long as they are not nested.
Sadly this doesn't seem to be the case: we either get an ObjectDisposedException in one of the forms or in the service (because the service session picked up an existing transaction from one of the forms and committed it, which fails the commit in the form, or the other way round), or we get an InvalidOperationException stating that "Parallel transactions are not supported by SQL Server".
Is there really no way to open more than one transaction in parallel (across separate sessions)?
And alternatively is there a better way to update stale data in a long running form?
Thanks in advance!
I'm pretty sure you have messed something up, and are sharing either session or connection instances in ways you did not intend.
It can depend a bit on which sort of transactions you use:
If you use only NHibernate transactions (session.BeginTransaction()), each session acts independently. Unless you do something special to supply your own underlying database connections (and made an error there), each session will have its own connection and transaction.
If you use TransactionScope from System.Transactions in addition to the NHibernate transactions, you need to be careful about thread handling and the TransactionScopeOption. Otherwise different parts of your code may unexpectedly share the same transaction if a single thread runs through both parts and you haven't used TransactionScopeOption.RequiresNew.
Perhaps you are not properly disposing your transactions (and sessions)?
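For reference, the pattern that should work is one transaction per session, with sessions never shared across forms or threads; a minimal sketch with plain NHibernate APIs:

// each form (and the background poller) gets its own ISession from the shared factory
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    // ... load or update entities on this session only ...
    tx.Commit();
} // disposing the session releases its own connection

// a second session opened concurrently by the polling service gets an
// independent connection and transaction, so nothing nests or is shared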
My DBA says that there are way too many connections open, and he thinks it is my .NET code that is leaving them open.
I am using LINQ queries and EF Code First.
Example Method:
public List<Stuff> GetStuff()
{
    var db = new DBContext();
    var results = db.stuff.ToList();
    return results;
}
Do I need to dispose the db variable once I am done? My understanding was that I didn't need to with EF and LINQ. Please point me to Microsoft documentation about managing connections in code, or to best practices for LINQ/EF and DB connections.
Update:
I added
db.Connection.Close();
db.Dispose();
and I still see the open connection in SQL after the two lines were executed. Is there a reason why it wouldn't close when I force it to close?
You should listen to your DBA! Yes, use a using. Do not leave connections open unnecessarily. You should connect, do your business with the DB, and close that connection, freeing it up for another process. This is especially true in high-volume systems.
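For the method in the question that just means wrapping the context (DBContext and stuff are the names from the question):

public List<Stuff> GetStuff()
{
    using (var db = new DBContext())
    {
        return db.stuff.ToList(); // connection returns to the pool on dispose
    }
}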
Edit: let me explain further from my own experience. In low-volume processing it probably isn't an issue, but it's a bad habit not to dispose of something explicitly, or not to wrap it in a using, when it clearly implements IDisposable.
In high-volume situations, this is just asking for disaster. SQL Server allows only so many connections per application (the pool size can be specified in the connection string). If connections aren't promptly closed, processes spend time waiting for one to free up, which generally leads to timeouts, or to deadlocks in some situations.
Sure, you can tweak SQL Server connection management and such, but every time you tweak a setting you're making a compromise. You must consider backups running, other jobs running, etc. This is why a wise developer will listen to their DBA's warnings. It's not always all about the code...
I just asked this same question over on Programmers.SE. Robert Harvey gave a great answer.
In general, you don't need to use using statements with Entity Framework data contexts. Lazy loading of collections is one of the reasons why.
I encourage you to read the entire answer on Programmers.SE as well as the links Robert provides in the answer.
Entity Framework uses, as far as I know, connection pooling by default to reduce the overhead of creating a new connection every time.
Are the connections closed when you close your application?
If so, you could try to decrease the Max Pool Size in your connection string or disable connection pooling entirely.
See here for a reference of possible options in your connection string.
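For illustration, the relevant SqlClient connection string keywords look like this (server and database names are placeholders):

// cap the pool at 10 physical connections
"Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;Max Pool Size=10"

// or disable pooling entirely (usually a last resort; every open gets expensive)
"Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;Pooling=False"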
By default DbContext automatically manages the connection for you. So you shouldn't have to explicitly call Dispose.
Blog post on the subject: Link
But I believe not disposing can cause performance issues if you're processing a lot of requests. Try adding a using statement and see whether it makes a difference in your case.
Yes, if your method defines a unit of work; no, if it is something more primitive. (P.S. Something somewhere in your code should define a unit of work, and that thing should be wrapped in a using (var context = new DbContext()) {} or equivalent.)
And if you belong to the school of thought that your DbContext is your Unit of Work, then you'll always be wrapping that bad boy with a using block: the local caching of data previously fetched during the context lifetime together with the SaveChanges method act as a sort of lightweight transaction, and your Dispose (without calling SaveChanges) is your Rollback (whereas your SaveChanges is your Commit).
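A sketch of that reading (MyDbContext, Orders, and the property are illustrative):

// one unit of work = one context = one using block
using (var context = new MyDbContext())
{
    var order = context.Orders.Find(orderId);
    order.Status = "Shipped";

    context.SaveChanges(); // the "commit"
} // disposing without SaveChanges discards pending changes: the "rollback"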
Check this out, here's a standard protocol on how to use IDisposable objects.
https://msdn.microsoft.com/en-us/library/yh598w02.aspx
It says:
"As a rule, when you use an IDisposable object, you should declare and instantiate it in a using statement."
Because they may hold unmanaged resources, you should always consider a using statement.
In a C# 2008 Windows application that calls a web service, I am using LINQ to SQL statements that look like the following:
TDataContext TData = new TDataContext();
var TNumber = (from dw in TData.people
               where dw.Organization_Name.ToUpper().Trim() == strOrgnizationName.Trim().ToUpper()
               select dw).Count(); // the original snippet was cut off; completed here for illustration
Right before every call that is made to the database, a new data context object is created.
Would this cause some kind of connection pooling problem to the database? If so, can you tell me how to resolve the connection pooling problem?
Connection pooling is not a problem; it is a solution to a problem. It is connection pooling that enables you to write
TDataContext TData = new TDataContext();
without fear of exhausting the limited number of RDBMS connections, or slowing your system to a crawl by closing and re-opening connections too often. The only issue that you may run into with code like that is caching: whatever is cached in TData is gone when it goes out of scope, so you may re-read the same info multiple times unnecessarily. However, the cache on the RDBMS side will help you in most cases, so even caching is not going to be an issue most of the time.
A DataContext is a lightweight object which closes its database connection as soon as it has completed its task.
Consequently, creating a large number of these objects shouldn't cause a connection pooling problem unless, possibly, they are being created simultaneously on many different threads.
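So the pattern in the question is fine as far as pooling goes; if anything, wrapping each short-lived context in a using makes the intent explicit (TDataContext and people come from the question; the query shape is illustrative):

using (var db = new TDataContext())
{
    var count = (from dw in db.people
                 where dw.Organization_Name.ToUpper().Trim() == strOrgnizationName.Trim().ToUpper()
                 select dw).Count();
} // dispose closes the context; its pooled connection is reused by the next one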