How to check if connection pools are cleared - C#

I want to rename a database file, and even though I wrap every connection in a using block, I still have to call:
FirebirdSql.Data.FirebirdClient.FbConnection.ClearAllPools();
The problem is that this method doesn't block the calling thread, and I don't know how to check whether all connections have been cleared, because if I read the value of:
FirebirdSql.Data.FirebirdClient.FbConnection.ConnectionPoolsCount
it is zero immediately after the method returns, but I am still not able to rename the database file. If I add a delay after the call (I tried 1 s), the file is no longer locked and I can rename it. The problem is that this delay would certainly differ between machines.
AFAIK the only other way to check whether the file is locked is to attempt the rename in a loop with some timeout, but then I cannot be sure whether the lock comes from my own application's connections or from somewhere else.
So is there a better way to wait until this method has cleared the connections?

Making it an answer for the sake of formatting lists.
@Artholl, you cannot safely rely on your own disconnection, for a number of reasons.
There may be other programs connected, not just your running program. And unless you connect as SYSDBA, the database creator, or with the RDB$ADMIN role, you cannot query whether there are other connections at the moment. You can, however, query MON$ATTACHMENTS for the connections made by the same user as your CURRENT_CONNECTION. This might help you check the state of your application's own pool, though there is little practical value in it.
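As an illustration, counting your own user's attachments via MON$ATTACHMENTS might look like this in C# (a sketch assuming the FirebirdSql.Data.FirebirdClient provider; note that the probe itself opens a connection, which is included in the count):

```csharp
using System;
using FirebirdSql.Data.FirebirdClient;

class AttachmentCheck
{
    // Counts attachments made by the same user as the current connection.
    // A non-privileged user only sees their own rows in MON$ATTACHMENTS,
    // which is exactly what we want here.
    static int CountOwnAttachments(string connectionString)
    {
        using (var connection = new FbConnection(connectionString))
        {
            connection.Open();
            using (var command = new FbCommand(
                "SELECT COUNT(*) FROM MON$ATTACHMENTS " +
                "WHERE MON$USER = CURRENT_USER", connection))
            {
                // Includes this probe connection itself, so the minimum is 1.
                return Convert.ToInt32(command.ExecuteScalar());
            }
        }
    }
}
```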
In Firebird 3, in SuperServer mode, there is the LINGER parameter: the server keeps the database open for some time after the last client disconnects, so that if a new client decides to connect again soon, the page cache for the DB file is already in place. Think of moderately loaded web servers.
Even in Firebird 2, every open database has caches, and how large they are is installation-specific (firebird.conf) and database-specific (gfix/gstat). When the engine sees that all clients have disconnected and decides the database should be closed, it starts by flushing its caches and asking the OS to flush its caches too (there is no general, hardware-independent way to tell RAID controllers and the disks themselves to flush their caches, or Firebird would attempt that as well). By default the Firebird caches are small and pushing them down to the hardware layer should be fast, but it is still not instant.
Even if you checked that all other clients had disconnected, then disconnected yourself, and then correctly guessed how long to wait for LINGER and the caches, you would still not be safe: you are subject to race conditions. At the very moment you start doing something that requires exclusive ownership of the DB, some new client may concurrently open a connection.
So the correct approach is not merely to prove that there is no database connection right now, but to ensure that there cannot be any new connection in the future, until you re-enable them.
So, as Mark said above, you have to use the shutdown methods to bring the database into a no-connections-allowed state, and after you are done with the file renaming and other manipulations, switch it back to normal mode.
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
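A sketch of that shutdown/online sequence from C#, assuming the provider's services API (FirebirdSql.Data.Services.FbConfiguration) exposes DatabaseShutdown/DatabaseOnline in your version; verify the exact names and signatures against your provider, or shell out to gfix as in the linked manual:

```csharp
using System;
using FirebirdSql.Data.Services;

class DatabaseMaintenance
{
    // Bring the database into no-connections-allowed state, do the file work,
    // then bring it back online. Equivalent gfix commands:
    //   gfix -shut full -force 0 mydb.fdb -user SYSDBA -pas masterkey
    //   gfix -online mydb.fdb -user SYSDBA -pas masterkey
    // Assumption: FbConfiguration provides DatabaseShutdown/DatabaseOnline
    // in your provider version; check before relying on this.
    static void WithDatabaseOffline(string connectionString, Action fileWork)
    {
        var configuration = new FbConfiguration { ConnectionString = connectionString };
        configuration.DatabaseShutdown(FbShutdownMode.Forced, 0);
        try
        {
            fileWork(); // rename/copy the file here; no new connections can appear
        }
        finally
        {
            configuration.DatabaseOnline();
        }
    }
}
```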

If I were responsible for maintaining the Firebird provider, I wouldn't want users to rely on such functionality.
Other applications could have the file open (you are only in control of the connection pools in the current AppDomain), and the server might be running some kind of maintenance on the database.
So even if you could wait for the pools to be cleared, I'd argue that if you really, really have to mess with these files, a more robust solution is to stop the Firebird service instead (and wait for it to have fully stopped).
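Stopping the service and waiting until it has fully stopped can be done with ServiceController; the service name below is an assumption (on a default Firebird install it is usually "FirebirdServerDefaultInstance"; check services.msc on your machine):

```csharp
using System;
using System.ServiceProcess; // reference System.ServiceProcess.dll

class FirebirdServiceHelper
{
    // Stops the Firebird service and blocks until it has fully stopped.
    // "FirebirdServerDefaultInstance" is an assumed service name; adjust
    // to whatever your installation registered.
    static void StopFirebird(TimeSpan timeout)
    {
        using (var service = new ServiceController("FirebirdServerDefaultInstance"))
        {
            if (service.Status != ServiceControllerStatus.Stopped)
            {
                service.Stop();
                // Throws System.ServiceProcess.TimeoutException if the
                // service doesn't stop within the given time.
                service.WaitForStatus(ServiceControllerStatus.Stopped, timeout);
            }
        }
    }
}
```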

Related

How/Where can I find out if I'm forgetting to dispose Redis connections?

I need to create a tool or some observation mechanism that reports how many Redis connections I have going on. We're having problems with this, and we only get actual data from the production environment (Azure), and by then it's kind of too late...
So, on a local machine (where every developer has Redis installed for testing), how can I know how many open connections I have at a given moment? The ideal number would be zero, because you open it, get/set whatever, close... right?
Run CLIENT LIST or INFO against your Redis instance to find out who's connected at any given moment.
The ideal number would be zero, cause you open it, get/set whatever, close... right?
Actually, not necessarily - some clients offer the possibility of keeping connections open for pooling purposes.
Use a class factory to create your redis connections, open them and lease them out to the consumer classes. The consumer classes return them to the factory for reuse or closure.
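A minimal sketch of that factory idea, assuming the StackExchange.Redis client (the question doesn't name one): the idiomatic pattern there is a single shared ConnectionMultiplexer rather than open/close per operation, and the server's client list gives you the live connection count:

```csharp
using System;
using System.Linq;
using StackExchange.Redis; // assumption: StackExchange.Redis client library

static class RedisConnectionFactory
{
    // One shared multiplexer for the whole process; consumers "lease"
    // lightweight IDatabase handles from it instead of opening connections.
    private static readonly Lazy<ConnectionMultiplexer> Connection =
        new Lazy<ConnectionMultiplexer>(
            () => ConnectionMultiplexer.Connect("localhost:6379"));

    public static IDatabase GetDatabase() => Connection.Value.GetDatabase();

    // Programmatic equivalent of running CLIENT LIST against the server.
    public static int CountClients()
    {
        var server = Connection.Value.GetServer("localhost", 6379);
        return server.ClientList().Length;
    }
}
```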

Create a 'Licensing' feature with SQL-Server

I want to implement the following interface on a 2-Tier application with an MS SQL-Server 2008R2 (i.e. no app server in between)
interface ILicense
{
    void Acquire(string license);
    void Release(string license);
}
However, I want the license to be released even if the application is killed or crashes without calling the Release method. I also want to avoid using a timer that refreshes the license every minute or so.
So I thought: use a dedicated SqlConnection together with the sp_getapplock and sp_releaseapplock stored procedures, because that's what they seem to be made for. Then I found out that these SPs only work from within a transaction, so I would need to keep the transaction open the whole time (i.e. while the application is running). Anyway, it works that way: the application starts, opens the connection, starts the transaction, and locks the license.
When the application terminates, the connection is closed, everything is rolled back, and the license is released. Super.
Whenever the running app needs to switch licenses (e.g. for another module), it calls Release on the old license and then Acquire on the new one. Cool.
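A sketch of the mechanism described above (hypothetical class; parameter values chosen for illustration, error handling reduced to the essentials):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class SqlLicense : IDisposable
{
    private readonly SqlConnection connection;
    private readonly SqlTransaction transaction;

    public SqlLicense(string connectionString, string license)
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
        transaction = connection.BeginTransaction();

        using (var command = new SqlCommand("sp_getapplock", connection, transaction))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@Resource", license);
            command.Parameters.AddWithValue("@LockMode", "Exclusive");
            command.Parameters.AddWithValue("@LockTimeout", 0);
            var result = command.Parameters.Add("@Result", SqlDbType.Int);
            result.Direction = ParameterDirection.ReturnValue;
            command.ExecuteNonQuery();
            // sp_getapplock returns a negative value when the lock
            // could not be granted.
            if ((int)result.Value < 0)
                throw new InvalidOperationException("License already in use.");
        }
    }

    // Rolling back releases the app lock. If the process is killed, the
    // server drops the connection, which also rolls back and releases it.
    public void Dispose()
    {
        transaction.Dispose();
        connection.Dispose();
    }
}
```

Note that sp_getapplock also accepts @LockOwner = 'Session', which ties the lock to the connection instead of a transaction, so you may not need to hold a transaction open at all.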
Now to my question(s):
Is it acceptable to have an open (uncommitted) transaction on a separate connection for a long time?
Are there any better ways to implement such a 'lock' mechanism? The problem is that the license must be released even if the application terminates unexpectedly. I thought of some sort of 'logout' trigger, but that does not exist in SQL Server 2008 R2.
I am by no means the SQL or DB guru that some members of this site are, but your setup raises a few concerns or things to consider.
This could really limit the number of concurrent users your application can have, especially in a 2-tier architecture. In a 3-tier approach, the app server would manage and pool these connections/transactions, but then you would lose the ability to use those stored procs to implement your licensing mechanism, I believe.
With the transaction open for an indeterminate period of time, I would worry about the possibility of tempdb growing too big or exceeding the space allocated to it. I don't know what else is going on in the app or in that transaction; my guess is nothing, but I thought I would mention it.
I hope I am not getting my SQL versions mixed up here, but transaction wraparound could cause the DB to shut down.
This limits your app significantly, as the data touched in the transaction holds locks that won't be released until you commit or roll back.
There must be a more elegant way to implement a licensing model that doesn't rely on leaving a transaction open for the life of the app or app module. A two-tier app implies that the client always has some kind of connectivity, so maybe generate a unique id for each client and either add a call-home method, or, if you really are set on instantaneous verification, have every client action that queries the DB also check that the client is properly licensed.
Lastly, in all the SQL teachings I have received from DB guys who really know their stuff, this kind of setup (a long-running open transaction) was never recommended unless there was a very specific need that could not be solved otherwise.

Closing connection to DB

I have a question about closing connections in C#. My company has an application where data flows automatically online from the app into a DB. I would like to create my own ASP + C# application that selects from that data (the DB table filled by the company app) as the source for an independent report. My question: can closing the connection in my app affect the second (company, very important) app? Could a record go missing in the DB due to my closing a connection, or could there be any other problems?
No, everything will be safe if you close it properly. I recommend always using the using construct. It is transformed into a try/finally and closes the resources automatically.
That depends entirely on your use case. If you open, and leave open, hundreds and hundreds, if not thousands and thousands, of idle connections, SQL Server will slowly begin to suffer performance degradation.
Think of it as asking your boss a question. You say, "Boss, I need to ask you a question," and then you remind him hundreds or thousands of times a second: "I need to ask you a question." Everything else he tries to do slowly loses performance, because he has to keep processing the fact that you are going to ask him something. Similarly with SQL Server. Mind you, at this point you haven't actually asked your question yet.
If your DBMS is Microsoft SQL Server, see this article: https://msdn.microsoft.com/en-us/library/ms187030.aspx
SQL Server allows a maximum of 32,767 user connections.
If you open 32k connections to the server, two things will likely happen:
Your DBA will come to you and say "wtf, mate?" by the time you get close. A likely argument will ensue, in which case you and the DBA will probably end up yelling and creating a scene.
Your DBMS will reach its maximum connection limit, and everything else will crap out.
Not saying that any of this will happen; that would require you to open 32,767 concurrent connections. But it goes to further prove that you should open and close connections as required. Also, if your application uses a connection pool and you open n connections, and the pool limit (separate from SQL Server's limit, mind you) is n, you have just stopped your app from opening any more.
Generally speaking, you should open your connections as late as possible, and close them as early as possible.
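A minimal sketch of that pattern (SomeTable is a hypothetical table name):

```csharp
using System.Data.SqlClient;

class ReportRepository
{
    // Open as late as possible, close as early as possible: the using
    // blocks dispose the command and connection even if an exception is
    // thrown, returning the connection to the pool immediately.
    public static int CountRows(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM SomeTable", connection)) // hypothetical table
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```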

MySQL .NET Connector 5.2.3 - Win 2k3 - Random Error - Unable to connect to any hosts - Restart website and it's fixed

I have a strange error on a Win 2k3 box running the MySQL Connector 5.2.3. It has happened 3 times in the last 24 hours, but only 4 times total in the last 6+ months.
[Exception: Exception of type 'System.Exception' was thrown.] MySql.Data.MySqlClient.NativeDriver.Open() +259
[MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.]
If I restart the app pool and website in IIS, the problem is resolved temporarily, but as stated, it's happened 3 times now in the last 24 hours. No server changes during that time either.
Any ideas?
Here's a guess based on limited information.
I don't think you are properly disposing of your database connections. Make sure you have the connections wrapped in using blocks. Calling .Close() isn't good enough: it closes the connection, but it doesn't dispose of it.
Second, the reason this is happening now instead of months ago is probably a combination of the amount of traffic you are seeing now versus before, and the number of database connections you are instantiating now versus before.
I just got called in to help with an app that exhibited exactly the same behavior. Everything was fine until one day it started crashing. A lot. They had the SQL command objects wrapped in using blocks, but not the connections. As traffic increased, the connection pool was filling up and bursting. Once we wrapped the connections in using blocks (forcing a call to Dispose()), everything was smooth sailing.
UPDATE
I just wanted to clarify one thing. SqlConnection is a wrapper around an unmanaged resource, which is why it implements IDisposable. When your connection object goes out of scope, it becomes available for garbage collection. From a coding perspective, you really have no idea when that might happen. It might be a few seconds; it might be considerably longer.
However, the connection pool, which SqlConnection talks to, is a completely separate entity. It doesn't know that the connection is no longer going to be used. The Dispose() method basically tells the connection pool that this particular piece of code is done with the connection, which means the pool can immediately reallocate those resources. Note that some of the MS documentation states that Close() is equivalent to Dispose(), but my testing has shown that this simply isn't true.
If your site creates more connections than it explicitly disposes of, then the connection pool will potentially fill up, depending on when garbage collection takes place. It really depends on the number of connections created and the amount of traffic received. Higher traffic = more connections and longer periods of time between GC runs.
Now, a default IIS configuration gives you 100 executing threads per worker process, and the default ADO.NET connection pool size is 100 connections. If you dispose of your connections immediately after using them, you will never exceed the pool size. Even if you make a lot of DB calls, the request thread makes them one at a time.
However, if you add more worker processes, increase the number of threads per process, or fail to dispose of your connections, then you run into the very real possibility of exceeding the connection pool size. Note that you can control the pool size in your SQL connection string.
Don't worry about performance of calling dispose then reopening the connection several times while processing a single page. The pool exists to keep connections alive with the server in order to make this a very fast process.
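Putting the two points together, a sketch with MySQL Connector/NET (the connection-string values are placeholders; adjust "Max Pool Size" to your traffic):

```csharp
using MySql.Data.MySqlClient; // MySQL Connector/NET

class MySqlExample
{
    // "Pooling" and "Max Pool Size" are standard connection-string options;
    // the host/database/credentials here are placeholders.
    const string ConnectionString =
        "Server=localhost;Database=mydb;Uid=user;Pwd=secret;" +
        "Pooling=true;Max Pool Size=100;";

    static object ReadOne(string sql)
    {
        // Wrapping the connection (not just the command) in using
        // guarantees Dispose() runs and the pooled connection is
        // returned to the pool right away.
        using (var connection = new MySqlConnection(ConnectionString))
        using (var command = new MySqlCommand(sql, connection))
        {
            connection.Open();
            return command.ExecuteScalar();
        }
    }
}
```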

MySql connection, can I leave it open?

Is it smart to keep the connection open throughout the entire session?
I made a C# application that connects to a MySql database, the program both reads and writes to it and the application has to be running about 10 hours a day non-stop.
Are there any risk attached to keeping the connection open instead of calling the close() function every time after you've plucked something from the database and opening it again when you need something new?
Leaving a connection open for a while is fine, as long as:
you don't have so many concurrently idle connections that you hit the MySQL connection limit;
you don't leave it open for hours without doing anything. The default MySQL connection wait_timeout is 8 hours; leave a connection inactive for that long and when you next come to use it you'll get a “MySQL server has gone away” error.
Since you're using ADO.NET, you can use ADO.NET's built-in connection pooling capabilities. Actually, let me refine that: you must always use ADO.NET's built-in connection pooling capabilities. By doing so, you let the .NET runtime transparently manage your connections for you in the background. It keeps connections open for a while even after you close them, and reuses them when you open a new connection. This is really fast stuff.
Make sure to state in your connection string that you want pooled connections, as it might not be the default behaviour.
You only need to create connections locally when you need them; since they're pooled in the background, there's no overhead in creating a new connection:
using (var connection = SomeMethodThatCreatesAConnectionObject())
{
    // do your stuff here

    connection.Close(); // not strictly necessary, as Dispose() closes it anyway,
                        // but still nice to do
}
That's how you're supposed to do it in .NET.
Yes you can, provided:
You will reconnect if you lose the connection
You can reset the connection state if something strange happens
You will detect if the connection "goes quiet", for example if a firewall timeout occurs
Basically it requires a good deal of attention to failure cases and correct recovery; connecting and disconnecting often is a lot easier.
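A sketch of that reconnect logic with MySQL Connector/NET (hypothetical class; a single retry on failure, and you should adjust the caught exception type to your client library):

```csharp
using System;
using MySql.Data.MySqlClient;

class ResilientDb
{
    // Executes a scalar query over a long-lived connection, reconnecting
    // once if the connection has gone quiet (e.g. wait_timeout or a
    // firewall dropped it).
    private MySqlConnection connection;
    private readonly string connectionString;

    public ResilientDb(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public object ExecuteScalar(string sql)
    {
        try
        {
            return TryOnce(sql);
        }
        catch (MySqlException) // "MySQL server has gone away" and friends
        {
            connection?.Dispose();
            connection = null;
            return TryOnce(sql); // one reconnect attempt, then give up
        }
    }

    private object TryOnce(string sql)
    {
        if (connection == null)
        {
            connection = new MySqlConnection(connectionString);
            connection.Open();
        }
        using (var command = new MySqlCommand(sql, connection))
            return command.ExecuteScalar();
    }
}
```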
I think that if there is a connection pooling mechanism, you'd better close the connection.
One reason is that you then don't need to re-check whether your connection is still alive.
If the application is using the connection, there is no reason to close it. If you don't need the connection, you should close it. If multiple applications connect to the database, there is a fixed number of connections available to that database. That's why it's better to close when you are done and reopen when you need it.
From a security point of view, I'd say it's better to close it after a query, just to be sure that no other program can inject its own things into the open connection.
As far as performance is concerned, it is clearly better to keep the connection open the whole time.
Your choice^^
No, I don't see any reason not to leave a connection open and re-use it: after all, this is the whole point behind the various connection-pool technologies that are around (although these are generally reserved for multi-threaded situations where workers are all operating on the same data source).
But, to expand on the answer by bobince: just because you are not closing the connection, don't assume that something else won't. The connection could time out, there could be network issues, or a hundred and one other reasons why your connection dies. You need to assume that the connection may not be there, and add logic to handle this exceptional case.
In my opinion, it is not good practice to keep connections open.
Another aspect that speaks for closing the connection every time is scalability. It might be fine now to leave it open, but what if your app is used by two or three times as many users? It's a pain in the neck to go back and change all the code. (I know; I've done it :-)
Your problem will be solved if you use connection pooling in your code. You don't need to open and close connections yourself, so you save the precious resources spent on opening a connection. You just return the connection to a pool, which, when asked for a connection, hands back an idle one.
Of course, my opinion is: get a connection instance, use it, commit/roll back your work, and return it to the pool. I would not suggest keeping the connection open for that long.
One thing I didn't see in the other answers yet: if you have prepared statements or temporary tables, they may block server resources until the connection is closed. On the other hand, it can be useful to keep the connection around for some time instead of recreating it every few moments.
You'll pay a performance penalty if you're constantly opening and closing connections. It might be wise to use connection pooling and a short wait_timeout if you are concerned that too many running copies of your app will eat up too many database connections.
