In a C# 2008 Windows application that calls a web service, I am using a large volume of LINQ to SQL statements that look like the following:
//Code
TDataContext TData = new TDataContext();
var TNumber = (from dw in TData.people
               where dw.Organization_Name.ToUpper().Trim() == strOrganizationName.Trim().ToUpper()
               select dw).FirstOrDefault();
Right before every call that is made to the database, a new data context object is created.
Would this cause some kind of connection pooling problem to the database? If so, can you tell me how to resolve the connection pooling problem?
Connection pooling is not a problem; it is a solution to a problem. It is connection pooling that enables you to write
TDataContext TData = new TDataContext();
without fear of exhausting the limited number of RDBMS connections, or slowing your system to a crawl by closing and re-opening connections too often. The only issue you may run into with code like that is caching: whatever is cached in TData is gone when it goes out of scope, so you may re-read the same info multiple times unnecessarily. However, the cache on the RDBMS side will help you in most cases, so even the caching is not going to be an issue most of the time.
A DataContext is a lightweight object which closes the database connection as soon as it has completed its task.
Consequently, creating a large number of these objects shouldn't cause a connection pooling problem unless, possibly, they are being created simultaneously on different threads.
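To make that lifetime explicit, each database call can wrap its short-lived context in a using block. Below is a runnable sketch in which TDataContext is an in-memory stand-in for the question's generated LINQ to SQL context (the Person type and sample data are invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for the question's LINQ to SQL context. In the real app this is
// the generated TDataContext; here an in-memory list keeps the sketch runnable.
class Person { public string Organization_Name = ""; }

class TDataContext : IDisposable
{
    public List<Person> people = new List<Person>
    {
        new Person { Organization_Name = " Acme Corp " }
    };
    public void Dispose() { /* a real context would release its pooled connection here */ }
}

class Program
{
    static void Main()
    {
        string strOrganizationName = "ACME CORP";
        // One short-lived context per operation; Dispose runs when the block exits.
        using (var tData = new TDataContext())
        {
            var match = (from dw in tData.people
                         where dw.Organization_Name.ToUpper().Trim()
                               == strOrganizationName.Trim().ToUpper()
                         select dw).FirstOrDefault();
            Console.WriteLine(match != null);
        }
    }
}
```

With pooling enabled, disposing each context in this way costs little, because the underlying physical connection is simply returned to the pool.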
Related
I have developed a Windows service that uses database connections.
I have created the following field:
private MyDBEntities _db;
and in OnStart I have:
_db = new MyDBEntities();
Then the service does its work.
In OnStop method I have:
_db.Dispose();
_db = null;
Is there a disadvantage with this approach? For performance reasons, I need the database (which is SQL Server) to be opened all the time, while the service is running.
Thanks
Jaime
If your service is the only app that accesses this database, you shouldn't see any performance decrease. However, in my opinion, it is not the best approach to hold a long-lived connection to the database. Imagine a situation where you don't keep your database on your own server but use a cloud provider (Google, AWS, Azure). With cloud solutions, the address of your server may not be fixed and may vary over time; the IP address could even change during the execution of a query (most likely you'd get a transient SqlException or similar).
If your service is the only app that accesses the database and you run only one instance of it, then this approach might be beneficial in terms of performance, since you don't have to open and close the connection each time. However, remember that with this approach many other issues may appear: you may have to reconnect after a stale connection, connect to other replica instances, or destroy an existing connection for reasons I can't think of at the moment. Moreover, remember the multithreading issues that will most likely arise with this approach if it isn't developed carefully.
IMHO, you should open a connection to the database whenever it is needed and close it just after use. You'll avoid most of the issues mentioned above.
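A minimal sketch of that open-late, close-early pattern for the service, assuming SQL Server and a hypothetical connection string and table; because Close merely returns the connection to the pool, the repeated Open/Close is cheap:

```csharp
using System.Data.SqlClient;

// Sketch: open just before the query, dispose immediately after.
// The connection string and the people table are illustrative assumptions.
static int CountPeople(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM people", conn))
    {
        conn.Open();                     // checks a connection out of the pool
        return (int)cmd.ExecuteScalar(); // COUNT(*) comes back as an int
    }                                    // Dispose returns the connection to the pool
}
```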
Having a singleton context will cause threads to lock on SaveChanges(), slowing performance.
Also, each event (which I presume runs asynchronously) could possibly save some other event's information, causing unexpected behavior.
As someone already pointed out, you can rely on connection pooling to avoid connection issues and dispose the context on each request/event fired.
I've been working on a web application in ASP.NET. My application has several pages, and all of them need to display tables that are populated from a database. Right now, on each page, I'm opening a database connection, executing the query specific to that page, and closing the connection. So this happens each time the user clicks a link to go to a new page or clicks a form control like the grid pager.
I was wondering if this was a disaster from the performance point of view. Is there any better way to do this?
Almost universally, database connections should be handled as follows: Open as late as possible, and close as soon as possible. Open and close for multiple queries/updates... don't think leaving it open saves you anything. Because connection pooling generally does a very good job for you of managing the connections.
It is perfectly fine to have a couple/few connections opened/closed in the production of a single page. Trying to keep a single connection open between page views would be quite bad... don't do that under any circumstances.
Basically, with connection pooling (enabled by default for almost all providers), "closing" a connection actually just releases it back to the pool to be reused. Trying to keep it open yourself will tie up valuable connections.
That is exactly how you want it to be. A database connection should only be opened when necessary and closed immediately after use.
What you may want to look at, especially if performance is a big issue for you, is caching. You may want to cache the entire page, or just parts of a page, or just the data that you would like displayed on your pages. You will save a lot of database trips this way, but you would have to now consider other things like when to update your cache, caching for different users, etc.
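As a runnable sketch of the data-caching idea, here is a tiny in-memory cache with absolute expiry standing in for ASP.NET's Cache/MemoryCache objects; the key, TTL, and loader delegate are illustrative assumptions, not part of the original post:

```csharp
using System;
using System.Collections.Generic;

// Sketch: cache expensive query results so repeated page hits skip the database.
class SimpleCache
{
    private readonly Dictionary<string, (object Value, DateTime Expires)> _items
        = new Dictionary<string, (object, DateTime)>();

    public T GetOrLoad<T>(string key, TimeSpan ttl, Func<T> load)
    {
        if (_items.TryGetValue(key, out var entry) && entry.Expires > DateTime.UtcNow)
            return (T)entry.Value;               // hit: no database trip
        T value = load();                        // miss: run the page's query once
        _items[key] = (value, DateTime.UtcNow.Add(ttl));
        return value;
    }
}

class Program
{
    static void Main()
    {
        var cache = new SimpleCache();
        int dbTrips = 0;
        Func<string> query = () => { dbTrips++; return "table rows"; };

        cache.GetOrLoad("page1", TimeSpan.FromMinutes(5), query);
        cache.GetOrLoad("page1", TimeSpan.FromMinutes(5), query);
        Console.WriteLine(dbTrips);  // the second call is served from the cache
    }
}
```

In a real application you would also handle invalidation (when the underlying data changes) and per-user variation, as noted above.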
From MSDN - Best Practices in ADO.Net
High performance applications keep connections to the data source in
use for a minimal amount of time, as well as take advantage of
performance enhancing technology such as connection pooling.
What you are doing is perfectly fine: opening the connection to execute the query and then closing it afterward. If you hold the connection open for a longer period of time and there are multiple people accessing your application, you might exhaust the connection limit usually set on the database.
Tying DB connections directly to UI code is a bad practice. As you are learning, I suggest you use web services to interact with the UI rather than linking your data interactions to the UI.
Like UI (ASPX page) >> BLL (Business Logic Layer) >> DAL (Data Access Layer)
Also, try using the 'using' keyword in the DAL so that connections and related objects are disposed after each DB interaction.
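A sketch of such a DAL method, with 'using' blocks disposing the connection, command, and reader; the connection string, query, and column are illustrative assumptions:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch of a DAL method: the UI calls the BLL, the BLL calls this, and every
// ADO.NET object is disposed as soon as the DB interaction finishes.
static List<string> GetOrganizationNames(string connectionString)
{
    var names = new List<string>();
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT Organization_Name FROM people", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                names.Add(reader.GetString(0));
        }
    } // the connection returns to the pool here
    return names;
}
```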
I am working in a small team of about a dozen developers, as the infrastructure/DBA resource, on a project being written in C# WPF. As I run traces against the SQL Server to see how performance is going, what I am seeing is a constant:
open connection
run some statement
close connection
exec sp_reset_connection
open connection
run some statement
close connection
exec sp_reset_connection
open connection
run some statement
close connection
exec sp_reset_connection
And so on and on and on. I have spoken with the devs about this, and some have mentioned possible situations where a foreach loop may encompass a using statement, so a foreach over a DataTable would open and close a connection for each of its rows.
Question: Is getting better control of the constant opening and closing of connections a worthy goal or are connections really that cheap? My logic is that while opening and closing a connection may relatively be cheap, nothing is cheap when done in sufficiently large numbers.
Details:
.Net Framework 4.5.1
SQL Server 2014
Entity Framework 6
If you use entity framework, you should create the context just before you need it and dispose it as soon as possible:
using (var someContext = new SomeContext())
{
}
The reason is to avoid memory building up and to avoid thread-safety issues.
Of course, don't do this in a loop - this is at the level of a request.
Opening and closing connections to a database are relatively expensive, as can be read in detail here: Performance Considerations (Entity Framework), but I think the same concept mostly applies without EF. During loops, it's usually not recommended to open and close the connection every time, but instead open it, process all rows and close the connection.
The answer would be to let the using encompass the loop, instead of the other way around. If performance is relevant (it almost always is), it definitely pays to put effort into efficient data access, especially early in the development process.
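A sketch of that inversion, assuming an EF 6 style context; SomeContext is the example context from the answer above, while the Orders set, the table variable, and MapRowToOrder are hypothetical names for illustration:

```csharp
// Anti-pattern: a new context (and connection checkout) per row.
// foreach (DataRow row in table.Rows)
//     using (var db = new SomeContext()) { /* one query per iteration */ }

// Better: let the using encompass the loop.
using (var db = new SomeContext())
{
    foreach (DataRow row in table.Rows)    // 'table' is a hypothetical DataTable
    {
        // all iterations share the same context and its connection
        db.Orders.Add(MapRowToOrder(row)); // MapRowToOrder is a hypothetical helper
    }
    db.SaveChanges();                      // one batch of work, one connection
}
```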
If performance is an issue but you don't want to refactor code, you should make sure connection pooling is enabled in the connection string (for SqlClient the keyword is Pooling=true, which is already the default).
Connection pooling allows one to keep the physical connection, which is generally expensive to set up, while disposing of the logical connection.
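The relevant keywords can be seen by composing a connection string by hand; the server and database names below are placeholders, and the pool sizes are illustrative (Pooling defaults to true and Max Pool Size to 100 for SqlClient):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // Illustrative pooling-related keywords for a SqlClient connection string.
        var parts = new Dictionary<string, string>
        {
            ["Data Source"] = "dbserver",        // placeholder server
            ["Initial Catalog"] = "MyDb",        // placeholder database
            ["Integrated Security"] = "true",
            ["Pooling"] = "true",                // on by default
            ["Min Pool Size"] = "1",             // keep one connection warm
            ["Max Pool Size"] = "100"            // the default ceiling
        };
        string cs = string.Join(";", parts.Select(kv => kv.Key + "=" + kv.Value));
        Console.WriteLine(cs);
    }
}
```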
I'm modifying a Winforms app to use connection pooling so data access can occur in background threads. The business logic is implemented in PL/SQL and there are a couple of security related stored procedures that have to be called in order to make use of the business logic.
What I need is a way to tell if the connection has been used without a round-trip to the database. I don't think I can keep track of them in a HashSet because I doubt Equals or even ReferenceEquals could be relied upon. Any ideas?
EDIT:
Just to be clear, I plan to use ODP.NET's built-in connection pooling mechanism. If I rolled my own connection pool, keeping track of which connections were new vs. used would be extremely trivial.
The connection pooling provided by ODP.NET is completely opaque. That is, it isn't leaky in the way I'd like it to be - there is no way of knowing if a connection has been used before or is brand new. However it is a leaky abstraction in another way: Any session state (e.g. package scoped variables, which are session scoped) is preserved between usages of the connection. Since this is a question about determining the used vs. new state of a connection without going to the database, the answer is that it simply cannot be done using ODP.NET's built-in connection pool.
That leaves two options:
Create a connection pool implementation that either provides that information or performs user-defined initialisation upon creation of each new connection; or
Perform a round-trip to the database to determine the used vs. new state of the connection.
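A sketch of the second option, assuming ODP.NET and a hypothetical session-scoped PL/SQL package: because package state lives in the session, a brand-new pooled session sees it uninitialised, while a reused session sees the flag already set. The package and function names here are invented for illustration:

```csharp
using System;
using Oracle.DataAccess.Client;  // ODP.NET provider

// Sketch: ask the session itself whether it has been initialised.
// security_pkg.is_initialised is a hypothetical package function whose
// backing variable is session-scoped, so a fresh session returns 0.
static bool IsNewSession(OracleConnection conn)
{
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT security_pkg.is_initialised FROM dual";
        return Convert.ToInt32(cmd.ExecuteScalar()) == 0;
    }
}
```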
ADO.NET manages a connection pool for you. It's even configurable. Why would you ever try to track these connections yourself?
http://msdn.microsoft.com/en-us/library/bb399543.aspx
And, specifically for Oracle:
http://msdn.microsoft.com/en-us/library/ms254502.aspx
The .NET Framework Data Provider for Oracle provides connection
pooling automatically for your ADO.NET client application. You can
also supply several connection string modifiers to control connection
pooling behavior (see "Controlling Connection Pooling with Connection
String Keywords," later in this topic).
Pool Creation and Assignment
When a connection is opened, a connection pool is created based on an
exact matching algorithm that associates the pool with the connection
string in the connection. Each connection pool is associated with a
distinct connection string. When a new connection is opened, if the
connection string is not an exact match to an existing pool, a new
pool is created.
Once created, connection pools are not destroyed until the active
process ends. Maintaining inactive or empty pools uses very few system
resources.
BTW, I guess I'm not totally hip to all the OracleClient changes that have been going on. It seems like Microsoft may be dropping support? Last I knew, ODP.NET was based on ADO.NET... but even if I'm mistaken about that, ODP.NET claims to support connection pooling out of the box as well:
http://download.oracle.com/docs/html/E10927_01/featConnecting.htm#CJAFIDDC
If what you need is just to know whether any connection did not come from the pool but was a fresh new one, I think you can use the HardConnectsPerSecond and SoftConnectsPerSecond performance counters provided by ODP.NET.
This won't tell you exactly which OracleConnection.Open() leads to a hard connection, though. I was also thinking about combining other ODP.NET perf counters to determine whether a new hard connection was created, but after some experiments this turns out not to be easy, because ODP.NET will also purge connections every three minutes (depending on the Decr Pool Size setting).
This is not a question about optimizing a SQL command. I'm wondering what ways there are to ensure that a SQL connection is kept open and ready to handle a command as efficiently as possible.
What I'm seeing right now is that I can execute a SQL command and that command will take ~1s; additional executions will take ~300ms. This is after the command has previously been executed against the SQL Server (from another application instance)... so the SQL cache should be fully populated for the executed query prior to this application's initial execution. As long as I continually re-execute the query I see times of about 300ms, but if I leave the application idle for 5-10 minutes and return, the next request will be back to ~1s (same as the initial request).
Is there a way, via the connection string or some property on the SqlConnection, to direct the framework to keep the connection hydrated and ready to handle queries efficiently?
Have you checked the execution plans for your procedures? Execution plans, I believe, are loaded into memory on the server and then cleared after certain periods of time, or depending on what tables etc. are accessed in the procedures. We've had cases where simplifying stored procedures (perhaps splitting them) reduces the amount of work the database server has to do in calculating the plans, and ultimately reduces the time of the first call. You can issue commands that force stored procedures to recompile each time, to test whether you are reducing the initial call time.
We've had cases where the complexity of a stored procedure made the database server continually recompile based on different parameters, which drastically slowed it down; splitting the SP, or simplifying large select statements into multiple update statements etc., helped a considerable amount.
Other ideas: perhaps intermittently call a simple GETDATE() or similar every so often, so that the SQL Server stays awake (hope that makes sense), much the same as keeping an ASP.NET app in memory in IIS.
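A sketch of that keep-warm idea: a timer issues a trivial query every few minutes so the pool retains a live connection. The connection string and the 5-minute interval are illustrative assumptions:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

// Sketch: periodic cheap round-trip to keep the connection/pool warm.
static Timer StartKeepAlive(string connectionString)
{
    return new Timer(_ =>
    {
        try
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT GETDATE()", conn))
            {
                conn.Open();
                cmd.ExecuteScalar();   // trivial query, then back to the pool
            }
        }
        catch (SqlException) { /* ignore transient failures between pings */ }
    }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
}
```

Dispose the returned Timer when the application shuts down so the pings stop.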
The default minimum number of open connections in a .NET connection pool is zero.
You can adjust this value in your connection string to 1 or more:
"data source=dbserver;...Asynchronous Processing=true;Min Pool Size=1"
See more about these options in MSDN.
You keep it open by not closing it. :) But that's not advisable, since connection pooling will handle connection management for you. Do you have it enabled?
By default, connection pooling is enabled in ADO.NET. It is controlled through the connection string used by the application. More info in Using Connection Pooling with SQL Server.
If you use more than one database connection, it may be more efficient. Having one database connection means the best possible access speed is always going to be limited to sequential use, whereas having more than one connection gives the runtime an opportunity to overlap concurrent access a little more. I guess you're using .NET?
Also, if you're issuing the same SQL statement repeatedly, it's possible your database server is caching the result for a short period of time, making the return of the result set quicker.