I have a question about closing connections in C#. My company has an application where data flows automatically online from the app into a DB. I would like to create my own ASP + C# application that uses a SELECT over that data (the DB table filled by the company app) as the source for an independent report. My question: can closing the connection in my app influence the second (the company's, very important) app? Could a record go missing in the DB because I closed my connection, or could it cause any other problems?
No, everything will be safe if you close it properly; each connection is independent, so closing yours cannot make records go missing from the other app. I recommend you always use the using construct. It is compiled into a try/finally block and closes the resource automatically.
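For example, a minimal sketch of what that looks like for a read-only report query; the connection string and table name here are placeholders, not your company's real ones:

using System;
using System.Data.SqlClient;

// Placeholder connection string and table, for illustration only.
const string connectionString = "Server=.;Database=CompanyDb;Integrated Security=true;";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("SELECT Id, Amount FROM ReportSource", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Read values for your report; only your own connection is touched.
            Console.WriteLine(reader.GetInt32(0));
        }
    }
} // Dispose() runs here and closes the connection, even if an exception was thrown.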
That depends entirely on your use case: if you open, and leave open, hundreds or even thousands of idle connections, SQL Server will slowly begin to suffer performance degradation.

Think of it as asking your boss for something. You say, "Boss-man, I need to ask you a question," and then you remind him hundreds of times a second, "I need to ask you a question." Everything else he tries to do slowly loses performance, because he has to keep processing the fact that you are going to ask him a question. It is similar with SQL Server. Mind you, at this point you still haven't actually asked your question.
If your DBMS is Microsoft SQL Server, see this article: https://msdn.microsoft.com/en-us/library/ms187030.aspx
SQL Server allows a maximum of 32,767 user connections.
If you open 32k connections to the server, two things will likely happen:
Your DBA will come to you and say "wtf mate?" long before you get close. An argument will likely ensue, in which you and the DBA end up yelling and creating a scene.

Your DBMS will reach the maximum connection limit, and everything else will crap out.

I'm not saying any of this will happen (it would require you to open 32,767 concurrent connections), but it further proves that you should open and close connections as required. Also, if your application uses a connection pool and you open n connections, and the pool limit (separate from SQL Server's, mind you) is n, you have just stopped your app from opening any more.
Generally speaking, you should open your connections as late as possible, and close them as early as possible.
Related
I want to rename a database file, and even though I wrap every connection in using, I still have to call:
FirebirdSql.Data.FirebirdClient.FbConnection.ClearAllPools();
The problem is that this method doesn't block the calling thread, and I don't know how to check whether all connections have been cleared, because if I read the value of:
FirebirdSql.Data.FirebirdClient.FbConnection.ConnectionPoolsCount
It is zero immediately after the method returns, yet I am still not able to rename the database file. If I add a timeout after the method (I tried 1 s), the file is no longer locked and I can rename it. The problem is that this timeout will surely differ from machine to machine.
AFAIK the only other way to check whether the file is still locked is to attempt the rename in a loop with some timeout, but then I cannot be sure whether the lock comes from connections in my own application or from somewhere else.
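To illustrate, this is roughly the loop I mean (the paths and the retry limits are made up):

using System.IO;
using System.Threading;

FirebirdSql.Data.FirebirdClient.FbConnection.ClearAllPools();

const string oldPath = @"C:\data\mydb.fdb";    // hypothetical paths
const string newPath = @"C:\data\renamed.fdb";

for (var attempt = 0; attempt < 50; attempt++)
{
    try
    {
        File.Move(oldPath, newPath);
        break; // success: the file was not locked any more
    }
    catch (IOException)
    {
        // Still locked, but I cannot tell whether by my own pooled
        // connections or by some other process.
        Thread.Sleep(100);
    }
}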
So is there a better way to wait until this method has actually cleared the connections?
Making it an answer for the sake of formatting lists.
@Artholl, you cannot safely rely upon your own disconnection, for a bunch of reasons.
There may be other programs connected, not only your running program. And unless you connect as SYSDBA, as the database creator, or with the RDB$ADMIN role, you cannot query whether there are other connections at the moment. You can, however, query MON$ATTACHMENTS for the connections made with the same user as your CURRENT_CONNECTION. This might help you check the state of your application's own pool, though there is little practical value in it (a sketch of such a query follows after this list).
In Firebird 3, in SuperServer mode, there is the LINGER parameter: the server keeps the database open for some time after the last client disconnects, on the expectation that if a new client decides to connect again, the page cache for the DB file is already in place. Much like on a medium-load WWW server.

Even in Firebird 2, every open database has caches, and how large they are is installation-specific (firebird.conf) and database-specific (gfix/gstat). After the engine has seen all clients disconnect and decides the database is to be closed, it starts by flushing its caches and asking the OS to flush its caches too (there is no general, hardware-independent way to tell RAID controllers and the disks themselves to flush their caches, or Firebird would try to do that as well). By default Firebird's caches are small, and handing them down to the hardware layer should be fast, but it is still not instant.

Even if you checked that all other clients had disconnected, then disconnected yourself, and then correctly guessed how long to wait for linger and the caches, even then you would still not be safe: you are subject to race conditions. At the very moment you start doing something that requires exclusive ownership of the DB, some new client may concurrently open a new connection.
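As mentioned in the first item above, here is a hedged sketch of querying MON$ATTACHMENTS with the Firebird ADO.NET provider; the connection string is a placeholder, and you should check the exact MON$ATTACHMENTS columns against your server version:

using System;
using FirebirdSql.Data.FirebirdClient;

// Placeholder connection string, for illustration only.
const string cs = "DataSource=localhost;Database=C:\\data\\mydb.fdb;User=SYSDBA;Password=masterkey;";

using (var connection = new FbConnection(cs))
{
    connection.Open();
    // Attachments made under the same user as this connection.
    const string sql =
        "SELECT MON$ATTACHMENT_ID FROM MON$ATTACHMENTS WHERE MON$USER = CURRENT_USER";
    using (var command = new FbCommand(sql, connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetValue(0)); // attachment id
        }
    }
}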
So the correct approach is not merely to prove that there is no database connection right NOW, but to ensure there CANNOT be any new connection in the future, until you re-enable them.
So, as Mark said above, you have to use the shutdown methods to bring the database into a no-connections-allowed state, and after you are done with the file renaming and other manipulations, switch it back to normal mode.
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
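Purely as an assumed illustration: the Firebird ADO.NET provider also exposes a services API for this, and assuming the FbConfiguration class in FirebirdSql.Data.Services works as sketched here (verify the method names and enum values against your provider version), it would look something like:

using FirebirdSql.Data.Services;

// Assumed API sketch; names and signatures may differ between provider versions.
var configuration = new FbConfiguration();
configuration.ConnectionString =
    "DataSource=localhost;Database=C:\\data\\mydb.fdb;User=SYSDBA;Password=masterkey;";

// Deny new connections, give existing ones up to 60 seconds to finish.
configuration.DatabaseShutdown(FbShutdownMode.DenyConnection, 60);

// ... rename the file or do other exclusive manipulations here ...

// Switch the database back to normal, connections-allowed mode.
configuration.DatabaseOnline();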
If I were responsible for maintaining the Firebird provider, I wouldn't want users to rely on such functionality.
Other applications could have the file open (you're only in control of connection pools in the current AppDomain), and the server might be running some kind of maintenance on the database.
So even if you can wait for the pools to be cleared, I'd argue that if you really really have to mess with these files, a more robust solution is to stop the firebird service instead (and wait for it to have fully stopped).
I have written many desktop applications, and they have all worked great using a MySQL connection to a database and SQL to query it. I now want to start a larger project, and it feels "wrong" to make many, many database connections when I could connect to the server in a client-server relationship and just query, without having to keep opening and closing connections.

I have done a fair bit of digging around on Google, but to no avail. I think it's a case of knowing what I want to find, but not what to search for.
Any gentle nudge in the right direction would be greatly appreciated!
Generally it is accepted best practice to open a database connection, perform some actions and then close the connection.
It is best not to worry about the efficiency of using lots of connections in this fashion. SQL Server deals with this quite nicely using connection pooling: http://msdn.microsoft.com/en-us/library/8xx3tyca(v=vs.110).aspx

If you decide to keep connections open throughout the life of your application, you run the risk of having lots of idle connections sitting around, which is more "wrong" than opening and closing.
Obviously there are exceptions to these rules (such as if you find yourself opening 100s of connections in very short succession)... but this is general advice.
I'm investigating some performance issues in our product. We have several threads (on the order of tens), each doing some form of polling every couple of seconds, and every time we do that we open a connection to the DB, query, and then dispose.
So I'm wondering, would it be more efficient for each thread to keep an open connection to the DB so we don't have to go through the overhead of opening a connection every 2 seconds? Or is it better to stick with the textbook using blocks?
The first thing to learn about is connection pooling. You're already using it; don't change your code.
The question becomes: how many connections to claim in my config file?
And that's easy to change and measure.
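For instance, with System.Data.SqlClient the pool is configured through the connection string; the keywords below are standard SqlClient ones, and the numbers are placeholders you should tune by measuring:

using System.Data.SqlClient;

// "Pooling", "Min Pool Size" and "Max Pool Size" are standard SqlClient
// connection string keywords; the values are placeholders.
const string cs = "Server=.;Database=MyDb;Integrated Security=true;" +
                  "Pooling=true;Min Pool Size=5;Max Pool Size=50;";

using (var connection = new SqlConnection(cs))
{
    connection.Open(); // borrows an already-open connection from the pool
}                      // Dispose() returns it to the pool; it stays physically open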
As mentioned, connection pooling should take care of it, but if you are hammering the database with messaging or something like that, checking the status of things every few seconds, you could fill up the connection pool very quickly. If you are on SQL Server, run sp_who2 in a query window and you'll see a lot of information: the number of SPIDs (open connections), blocking, etc.
In general, connection setup and teardown are expensive; doing this many times across tens of threads might be crippling. Note, however, that the driver you use may already be pooling connections for you (even if you're not aware of it), so this may not be necessary (see your manuals and configuration).

On the other hand, if you decide to implement some sort of connection pooling yourself, check that your DB server can handle tens of extra connections (it probably can); a toy sketch of the idea follows.
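Purely to illustrate the idea (not production code, and real drivers already do this better), a toy hand-rolled pool might look like:

using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;

// Toy sketch of a hand-rolled connection pool.
class ToyConnectionPool
{
    private readonly ConcurrentBag<SqlConnection> _idle = new ConcurrentBag<SqlConnection>();
    private readonly string _connectionString;

    public ToyConnectionPool(string connectionString)
    {
        _connectionString = connectionString;
    }

    public SqlConnection Rent()
    {
        // Reuse an idle connection if one is available and still open.
        while (_idle.TryTake(out var connection))
        {
            if (connection.State == ConnectionState.Open) return connection;
            connection.Dispose(); // stale; throw it away
        }
        var fresh = new SqlConnection(_connectionString);
        fresh.Open();
        return fresh;
    }

    public void Return(SqlConnection connection)
    {
        _idle.Add(connection); // unbounded here; a real pool would cap this
    }
}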
Can anyone tell me what the disadvantages are of using MicrosoftApplicationsDataBlock.dll (the SqlHelper class)?

How can we sustain the maximum number of connection requests in a .NET application? If we have lakhs (hundreds of thousands) of requests at a time, is it OK to use MicrosoftApplicationsDataBlock.dll (the SqlHelper class)?
More "modern" dataaccess libraries are generally preferable, they provide better performance, flexibility and usability. I would generally avoid the old SQLHelper class if possible. :) I worked on an old project where a dependency on the SQLHelper class kept us from upgrading from .NET 1.1 to .NET 4.
For awesome performance, you may want to take a look at Dapper; it's used here at Stack Overflow and is very fast and easy to use.
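A minimal sketch of what that looks like; the Order type and the connection string are made up for the example:

using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

// Hypothetical type and connection string, for illustration only.
class Order
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
}

class Program
{
    static void Main()
    {
        using (var connection = new SqlConnection("Server=.;Database=Shop;Integrated Security=true;"))
        {
            // Dapper extends IDbConnection; Query<T> maps each row onto an Order.
            IEnumerable<Order> orders = connection.Query<Order>(
                "SELECT Id, Amount FROM Orders WHERE Amount > @min",
                new { min = 100m });
        }
    }
}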
But if you're looking at 100k simultaneous requests (per second, minute, day??), you probably want to avoid the database altogether. Look at caching, either ASP.NET's own built-in cache or maybe something like the Windows Server AppFabric Cache.
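For example, a small read-through cache sketch with System.Runtime.Caching; the cache key, the five-minute lifetime and the loader delegate are placeholders:

using System;
using System.Runtime.Caching;

static class ReportCache
{
    // loadReport stands in for whatever expensive DB query you have.
    public static object GetReport(Func<object> loadReport)
    {
        var cache = MemoryCache.Default;
        var report = cache.Get("report");
        if (report == null)
        {
            report = loadReport(); // hits the database only on a cache miss
            cache.Set("report", report, DateTimeOffset.Now.AddMinutes(5));
        }
        return report;
    }
}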
Disadvantages of SqlHelper? I don't think there are any. You get a lot of code for free for opening and closing connections, transaction handling, and so on; nothing you couldn't write yourself. But the number of connections your app can open is not a function of SqlHelper or any other DB helper you use. In every scenario you end up calling System.Data.SqlClient, which is the API for connecting to and working with SQL Server.

When you launch N connections, they all go to the SQL Server scheduler services. If all the CPUs are busy working on the available SPIDs (processes), the new ones go into a queue. You can see them using sp_who2, or with select * from sys.sysprocesses.

The waiting SPIDs are offered CPU cycles at intervals (based on some algorithm I am not sure of). This is called an SOS scheduler yield, where one process yields the scheduler to another. This works fine until you reach the maximum number of concurrent connections the server can hold, which differs between SQL Server editions (Developer, Enterprise, etc.). When you reach that maximum, SQL Server has no more threads left in its thread pool to give your app new connections, and in that scenario you will get a SqlException saying the connection timed out.

Long story short: you can open as many connections as you like and keep them open as long as you want, whether through SqlHelper or a traditional connection.Open(). But good practice is to open a connection, do an atomic unit of work, close the connection, and not open too many connections, because your SQL box will run out of connection handles to give your app.

SqlHelper is just a helper; the best practices of ADO.NET programming still apply whether you use it or not.
Is it smart to keep the connection open throughout the entire session?
I made a C# application that connects to a MySQL database; the program both reads from and writes to it, and the application has to run about 10 hours a day non-stop.

Are there any risks attached to keeping the connection open, instead of calling Close() every time after you've plucked something from the database and opening it again when you need something new?
Leaving a connection open for a while is fine, as long as:
you don't have so many concurrently idle connections that you hit the MySQL connection limit;
you don't leave it open for hours without doing anything. The default MySQL connection wait_timeout is 8 hours; leave a connection inactive for that long and when you next come to use it you'll get a “MySQL server has gone away” error.
Since you're using ADO.NET, you can use ADO.NET's inbuilt connection pooling capabilities. Actually, let me refine that: you must always use ADO.NET's inbuilt connection pooling capabilities. By doing so you will get the .NET runtime to transparently manage your connections for you in the background. It will keep the connections open for a while even if you closed them and reuse them if you open a new connection. This is really fast stuff.
Make sure to mention in your connection string that you want pooled connections as it might not be the default behaviour.
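With MySQL Connector/NET that might look like the sketch below; the pool options shown are the connector's documented keywords, but the values are placeholders, so check what your connector version supports:

using MySql.Data.MySqlClient;

// "Pooling", "Minimum Pool Size" and "Maximum Pool Size" are Connector/NET
// connection string options; the values here are placeholders.
const string cs = "Server=localhost;Database=mydb;Uid=user;Pwd=secret;" +
                  "Pooling=true;Minimum Pool Size=2;Maximum Pool Size=20;";

using (var connection = new MySqlConnection(cs))
{
    connection.Open(); // served from the pool when an idle connection exists
}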
You only need to create connections locally when you need them; since they're pooled in the background, there's no overhead in creating a new connection:
using (var connection = SomeMethodThatCreatesAConnectionObject())
{
    // do your stuff here

    connection.Close(); // not strictly necessary, since Dispose()
                        // closes it anyway, but still nice to do
}
That's how you're supposed to do it in .NET.
Yes you can, provided:
You will reconnect if you lose the connection
You can reset the connection state if something strange happens
You will detect if the connection "goes quiet", for example if a firewall timeout occurs
Basically it requires a good deal of attention to failure cases and correct recovery; connecting and disconnecting often is a lot easier.
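For instance, a rough sketch of the detect-and-reconnect idea with MySQL Connector/NET; EnsureOpen is a made-up helper, not a complete recovery policy:

using MySql.Data.MySqlClient;

// Hypothetical helper illustrating the three points above.
static class ConnectionKeeper
{
    public static MySqlConnection EnsureOpen(MySqlConnection connection, string connectionString)
    {
        // Ping() returns false when the server side has silently dropped us
        // (for example after a firewall timeout), so rebuild the connection.
        if (connection == null || !connection.Ping())
        {
            connection?.Dispose();
            connection = new MySqlConnection(connectionString);
            connection.Open();
        }
        return connection;
    }
}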
I think that if there is a connection pooling mechanism, you'd better close the connection.

One reason is that you then do not need to re-check whether your connection is still alive.
If the application is actively using the connection, there is no reason to close it. If you don't need the connection, you should close it. If multiple applications connect to the database, there is only a fixed number of connections available to that database; that's why it's better to close when you are done and reopen when you need it.
From a security point of view, I'd say it's better to close it after a query, just to be sure that no other program can inject its own statements into the open connection.

As far as performance is concerned, it is clearly better to keep the connection open the whole time.
Your choice^^
No, I don't see any reason not to leave a connection open and re-use it: after all, this is the whole point behind the various connection-pool technologies that are about (although these are generally reserved for multi-threaded situations where workers are all operating on the same data source).

But, to expand on bobince's answer: just because you are not closing the connection, don't assume that something else won't. The connection could time out, there could be connection issues, or a hundred and one other reasons why your connection dies. You need to assume that the connection may not be there, and add logic to code for this exception case.
It is not good practice, in my opinion, to keep connections open.

Another aspect that speaks for closing connections every time is scalability. It might be fine now to leave them open, but what if your app ends up with two or three times the number of users? It's a pain in the neck to go back and change all the code. (I know, I've done it. :-)
Your problem will be solved if you use connection pooling in your code. You don't need to open and close connections yourself, so you save the precious resources spent on opening a connection; you just return the connection to a pool, which hands back an idle connection whenever one is requested.

Of course, my own opinion is: get a connection instance, use it, commit or roll back your work, and return it to the pool. I would not suggest keeping the connection open for so long.
One thing I didn't see in the other answers yet: if you have prepared statements or temporary tables, they may block server resources until the connection is closed. On the other hand, it can be useful to keep the connection around for some time instead of recreating it every few moments.
You'll pay a performance penalty if you're constantly opening and closing connections. It might be wise to use connection pooling and a short wait_timeout if you are concerned that too many running copies of your app will eat up too many database connections.