I have written many desktop applications, and they have all gone well using a MySQL connection to a database and SQL to query it. I now want to start a larger project, and it feels "wrong" to make many, many database connections when I could connect to the server in a client-server relationship and just query without having to keep opening and closing connections.
I have done a fair bit of digging around on Google, but to no avail. I think it's a case of knowing what I want to find, but not what to search for.
Any gentle nudge in the right direction would be greatly appreciated!
Generally, it is accepted best practice to open a database connection, perform some actions, and then close the connection.
It is best not to worry about the efficiency of using lots of connections in this fashion. SQL Server deals with this quite nicely using connection pooling: http://msdn.microsoft.com/en-us/library/8xx3tyca(v=vs.110).aspx
If you decide to keep connections open throughout the use of your application, you run the risk of having lots of idle connections sitting around, which is more "wrong" than opening and closing.
Obviously there are exceptions to these rules (such as if you find yourself opening hundreds of connections in very short succession)... but this is general advice.
Related
I've been working on a web application in ASP.net. My application has several pages and all of them need to display tables that are populated by a database. Right now what I'm doing is, on each page, I'm opening a database connection, executing the query specific to that page, and closing the db connection. So this happens each time the user clicks a link to go to a new page or clicks a form control like the grid page.
I was wondering if this was a disaster from the performance point of view. Is there any better way to do this?
Almost universally, database connections should be handled as follows: open as late as possible, and close as soon as possible. Open and close for multiple queries/updates; don't think leaving the connection open saves you anything, because connection pooling generally does a very good job of managing the connections for you.
It is perfectly fine to have a couple/few connections opened/closed in the production of a single page. Trying to keep a single connection open between page views would be quite bad... don't do that under any circumstances.
Basically, with connection pooling (enabled by default for almost all providers), "closing" a connection actually just releases it back to the pool to be reused. Trying to keep it open yourself will tie up valuable connections.
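A minimal sketch of that open-late/close-early pattern in ADO.NET (the connection string, database, and table here are hypothetical; pooling is on by default in SqlClient unless you add Pooling=false):

```csharp
using System.Data;
using System.Data.SqlClient;

static class OrdersRepository
{
    // Hypothetical connection string; with the default Pooling=true,
    // disposing a SqlConnection releases it back to the pool.
    public const string ConnStr = "Server=.;Database=Shop;Integrated Security=true;";

    public static DataTable GetOrders()
    {
        // Open as late as possible...
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT Id, Total FROM Orders", conn))
        {
            conn.Open();
            var table = new DataTable();
            table.Load(cmd.ExecuteReader());
            return table;
        } // ...and "close" as soon as possible: Dispose() here returns the
          // physical connection to the pool rather than tearing it down.
    }
}
```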
That is exactly how you want it to be. A database connection should only be opened when necessary and closed immediately after use.
What you may want to look at, especially if performance is a big issue for you, is caching. You may want to cache the entire page, or just parts of a page, or just the data that you would like displayed on your pages. You will save a lot of database trips this way, but you would have to now consider other things like when to update your cache, caching for different users, etc.
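The data-caching idea can be sketched without committing to a specific ASP.NET caching API. This hypothetical helper just memoizes query results with a time-to-live, so repeated page hits within the TTL cost no database trip:

```csharp
using System;
using System.Collections.Generic;

// Minimal time-based cache sketch (a hypothetical helper, not ASP.NET's Cache API).
class QueryCache<T>
{
    readonly Dictionary<string, (T Value, DateTime Expires)> _items = new();
    readonly TimeSpan _ttl;

    public QueryCache(TimeSpan ttl) => _ttl = ttl;

    public T GetOrAdd(string key, Func<T> loadFromDb)
    {
        if (_items.TryGetValue(key, out var hit) && hit.Expires > DateTime.UtcNow)
            return hit.Value;               // cache hit: no database trip

        var value = loadFromDb();           // cache miss: one database trip
        _items[key] = (value, DateTime.UtcNow + _ttl);
        return value;
    }
}
```

The update questions mentioned above then become: what TTL to pick, and when to evict a key early (and, for per-user data, putting the user in the cache key).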
From MSDN - Best Practices in ADO.NET:
High performance applications keep connections to the data source in use for a minimal amount of time, as well as take advantage of performance enhancing technology such as connection pooling.
What you are doing is perfectly fine: opening the connection to execute the query and then closing it afterward. If you hold the connection for a longer period of time while multiple people are accessing your application, you might run up against the connection limit usually set on a database.
Tying your DB connections directly to UI code is bad practice. Since you are learning, I suggest you use web services to interact with the UI rather than linking your data access to the UI.
Like UI (ASPX page) >> BLL (Business Logic Layer) >> DAL (Data Access Layer).
Also try using the 'using' keyword in the DAL, so that connections and related objects are disposed of after each DB interaction.
I have a question about closing connections in C#. My company has an application where data flows automatically online from the app to a DB. I would like to create my own ASP + C# application that will run SELECTs against that data (a DB table filled by the company app) as the source for an independent report. My question: can closing the connection in my app have any influence on the second (the company's, very important) app? Could a record go missing in the DB because I closed a connection, or could there be any other problems?
No, everything will be safe if you close it properly. I recommend that you always use the using construct. It compiles down to try/finally and closes resources automatically.
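To see what the using construct actually buys you, here is a self-contained sketch with a stand-in IDisposable (no real database involved): the compiler expands using into try/finally, so Dispose runs even when the body throws.

```csharp
using System;

class FakeConnection : IDisposable
{
    public bool Disposed { get; private set; }
    // Stands in for returning a real connection to the pool.
    public void Dispose() => Disposed = true;
}

static class Demo
{
    public static bool RunQueryThatThrows(FakeConnection conn)
    {
        try
        {
            using (conn)
            {
                throw new InvalidOperationException("query failed");
            }
        }
        catch (InvalidOperationException)
        {
            // Even though the body threw, the using block's implicit
            // finally has already called Dispose().
        }
        return conn.Disposed;
    }
}
```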
That totally depends on your use case. If you open, and leave open, hundreds or thousands of idle connections, SQL Server will slowly begin to suffer some performance degradation.
Think of it as asking your boss for something: you say, "Boss-man, I need to ask you a question," and then you remind him hundreds of thousands of times a second, "I need to ask you a question." Anything else he tries to do will slowly lose performance, because he has to keep processing the fact that you are going to ask him a question. It's similar with SQL Server. Mind you, at this point you haven't actually asked your question yet.
If your DBMS is Microsoft SQL Server, see this article: https://msdn.microsoft.com/en-us/library/ms187030.aspx
SQL Server allows a maximum of 32,767 user connections.
If you open 32k connections to the server, two things will likely happen:
Your DBA will come to you and say "wtf mate?" by the time you get close. An argument will likely ensue, in which you and the DBA will probably end up yelling and creating a scene.
Your DBMS will reach the maximum connection limit and everything else will crap out.
Not saying that any of this will happen; it requires you to open 32,767 concurrent connections. But it just goes to further prove that you should open/close as required. Also, if your application uses a connection pool and you open n connections while the pool limit (separate from SQL Server's, mind you) is n, you have just stopped your app from opening any more.
Generally speaking, you should open your connections as late as possible, and close them as early as possible.
I'm investigating some performance issues in our product. We have several threads (order of tens) that are each doing some form of polling every couple of seconds, and every time we do that we are opening a connection to the DB, querying, then disposing.
So I'm wondering, would it be more efficient for each thread to keep an open connection to the DB so we don't have to go through the overhead of opening a connection every 2 seconds? Or is it better to stick with the textbook using blocks?
First thing to learn about is Connection pooling. You're already using it, don't change your code.
The question becomes: how many connections to claim in my config file?
And that's easy to change and measure.
As mentioned, connection pooling should take care of it, but if you are beating on the database with messaging or something similar to check on the status of things every few seconds, then you could be filling up the connection pool very quickly. If you are on SQL Server, run SP_WHO2 in a query window and you'll see a lot of information: the number of SPIDs (open connections), blocking, etc.
In general, connection setup and teardown is expensive; doing this multiple times in tens of threads might be crippling; note however that the solution you use might already be pooling connections for you (even if you're not aware of it), so this may not be necessary (see your manuals and configuration).
On the other hand, if you decide to implement some sort of connection pooling yourself, check that your DB server can handle tens of extra connections (it probably can).
Can anyone tell me what the disadvantages are of using MicrosoftApplicationsDataBlock.dll (the SQLHelper class)?
How can we sustain the maximum number of connection requests in a .NET application? If we have lakhs (hundreds of thousands) of requests at a time, is it OK to use MicrosoftApplicationsDataBlock.dll (the SQLHelper class)?
More "modern" data-access libraries are generally preferable, as they provide better performance, flexibility, and usability. I would generally avoid the old SQLHelper class if possible. :) I worked on an old project where a dependency on the SQLHelper class kept us from upgrading from .NET 1.1 to .NET 4.
For awesome performance, you may want to take a look at Dapper, it's used here at Stackoverflow and is very fast and easy to use.
But if you're looking at 100k simultaneous requests (second, minute, day??) you probably want to avoid the database altogether. Look at caching, either ASP.NETs own built-in or maybe something like the Windows Server AppFabric Cache.
Disadvantages of SQLHelper -> I don't think there are any. You get a lot of code for free to open and close connections, handle transactions, etc.; nothing you cannot write yourself. But the number of connections you can open from your app is not a factor of SQLHelper or any other DbHelper you use. In any scenario you end up calling System.Data.SqlClient, which is the API for connecting to and working with SQL Server...
When you launch N connections, they all go to the SQL Server scheduler services. If all the CPUs are busy working on available SPIDs (processes), the new ones go into a queue. You can see them using sp_who2, or select * from sys.sysprocesses.
The waiting SPIDs are offered CPU cycles at intervals (based on some kind of algorithm that I am not sure of). This is called an SOS Scheduler Yield, where one process yields the scheduler to another. Now, this will be fine until you reach the maximum number of concurrent connections the server can hold, which differs between SQL Server editions (Developer/Enterprise, etc.). When you reach this maximum number of concurrent connections, SQL Server has no more threads left in its thread pool to give your app new connections; in that scenario you will get a SqlException saying the connection timed out...
Long story short: you can open as many connections as you want, and keep them open as long as you want, using SQLHelper or a traditional connection.Open(). But as good practice you should open a connection, do an atomic transaction, close the connection, and not open too many connections, because your SQL box will run out of connection handles to provide to your app.
SQLHelper is just a helper; the best practices of ADO.NET programming still apply whether you use it or not.
Is it smart to keep the connection open throughout the entire session?
I made a C# application that connects to a MySql database, the program both reads and writes to it and the application has to be running about 10 hours a day non-stop.
Are there any risks attached to keeping the connection open, instead of calling the Close() function every time after you've plucked something from the database and opening it again when you need something new?
Leaving a connection open for a while is fine, as long as:
you don't have so many concurrently idle connections that you hit the MySQL connection limit;
you don't leave it open for hours without doing anything. The default MySQL connection wait_timeout is 8 hours; leave a connection inactive for that long and when you next come to use it you'll get a “MySQL server has gone away” error.
Since you're using ADO.NET, you can use ADO.NET's inbuilt connection pooling capabilities. Actually, let me refine that: you must always use ADO.NET's inbuilt connection pooling capabilities. By doing so you will get the .NET runtime to transparently manage your connections for you in the background. It will keep the connections open for a while even if you closed them and reuse them if you open a new connection. This is really fast stuff.
Make sure to mention in your connection string that you want pooled connections as it might not be the default behaviour.
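For example, with MySQL Connector/NET the pooling-related keywords look roughly like this (the server, credentials, and pool sizes below are made up; check your provider's documentation for the exact keyword names and defaults):

```csharp
// Hypothetical values; tune Min/Max Pool Size to your workload.
var connStr =
    "Server=dbhost;Database=mydb;Uid=appuser;Pwd=secret;" +
    "Pooling=true;Min Pool Size=0;Max Pool Size=25;";
```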
You only need to create connections locally when you need them; since they're pooled in the background, there's no overhead in creating a new connection:
using (var connection = SomeMethodThatCreatesAConnectionObject())
{
    // do your stuff here

    connection.Close(); // not strictly necessary, since Dispose()
                        // closes it anyway, but still nice to do
}
That's how you're supposed to do it in .NET.
Yes you can, provided:
You will reconnect if you lose the connection
You can reset the connection state if something strange happens
You will detect if the connection "goes quiet", for example if a firewall timeout occurs
Basically it requires a good deal of attention to failure cases and correct recovery; connecting and disconnecting often is a lot easier.
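A rough sketch of that detect-and-recover logic for MySQL (this assumes the MySQL Connector/NET driver and its MySqlConnection.Ping() method; treat the exact API as an assumption and check your driver's documentation):

```csharp
using System.Data;
using MySql.Data.MySqlClient; // assumption: MySQL Connector/NET package

static class ResilientDb
{
    // Returns a usable connection, reconnecting if the old one has gone quiet.
    public static MySqlConnection EnsureAlive(MySqlConnection conn, string connStr)
    {
        // Detect a dead or silently-dropped connection (e.g. a firewall timeout)...
        if (conn == null || conn.State != ConnectionState.Open || !conn.Ping())
        {
            // ...and recover: throw the old one away and reconnect from scratch.
            conn?.Dispose();
            conn = new MySqlConnection(connStr);
            conn.Open();
        }
        return conn;
    }
}
```

Resetting connection state (the second point above) would go in the same place, right after the reconnect.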
I think that if there is a connection pooling mechanism, you'd better close the connection.
One reason for this is that you do not need to re-check whether your connection is still alive.
If the application is using the connection there is no reason to close it. If you don't need the connection you should close it. If you were to have multiple applications connect to the database, you have a fixed number of connections to that database. That's why it's better to close when you are done and reopen when you need it.
From a security point of view, I'd say it's better to close it after a query, just to be sure that no other program can inject its own statements into the open connection.
As far as performance is concerned, it is clearly better to keep the connection open the whole time.
Your choice^^
No, I don't see any reason not to leave a connection open and re-use it: after all, this is the whole point behind the various connection-pool technologies that are around (although these are generally reserved for multi-threaded situations where workers are all operating on the same data source).
But, to expand on the answer by bobince: just because you are not closing the connection, don't assume that something else won't. The connection could time out, there could be connection issues, or a hundred and one other reasons why your connection dies. You need to assume that the connection may not be there, and add logic to handle this failure case.
It is not good practice, in my opinion, to keep connections open.
Another aspect that speaks for closing the connection every time is scalability. It might be fine now to leave it open, but what if your app is used by two or three times as many users? It's a pain in the neck to go back and change all the code. (I know, I've done it :-)
Your problem will be solved if you use connection pooling in your code. You don't pay the cost of physically opening a connection each time, which saves precious resources; you just return the connection to a pool, which hands back an idle connection the next time one is requested.
Of course, my opinion is: get an instance of a connection, use it, commit/roll back your work, and return it to the pool. I would not suggest keeping the connection open for that long.
One thing I didn't see in the other answers yet: if you have prepared statements or temporary tables, they might tie up server resources until the connection is closed. But on the other hand, it can be useful to keep the connection around for some time instead of recreating those objects every few moments.
You'll pay a performance penalty if you're constantly opening and closing connections. It might be wise to use connection pooling and a short wait_timeout if you are concerned that too many running copies of your app will eat up too many database connections.