OracleConnection Close issue - C#

I have an application that inserts and updates about 10,000 entries into several tables in an Oracle database using ODP.NET. I've split the job into blocks of 100 entries.
At first the application was opening and closing the OracleConnection for each entry. It ran fine for some blocks, but after a while (not always at the same point) it would simply stop running: still using memory, but no CPU, and no error was thrown. I found out this happened when the application called the OracleConnection Close method.
I have since changed it to open and close the connection once, at the beginning and the end of the application, and everything is fine.
Granted, opening and closing the connection for each entry wasn't the proper way to do it, but my question is: why did it stop precisely in the Close() method of the OracleConnection?
Does anyone have an idea?
Thanks in advance.

I can suggest two reasons, both of which I've seen before.
First, if you have a long-running connection affecting a lot of records, it's possible that, because of the elapsed time (or perhaps because something is blocking the insert/update), the connection pool manager is attempting to reclaim and recycle the connection.
Another one, which is very difficult to debug, is the possibility that your connections are going through a firewall, and the firewall is dropping long-running connections. If this is the case, you might experience the occasional problem when opening a new connection from the pool - it should be usable, but fails when you try to open it (I forget the exact symptoms and error messages, as this was a few years back).
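For what it's worth, the usual middle ground between a connection per entry and one application-lifetime connection is a connection per block, letting the pool reuse the underlying session. A minimal sketch, assuming ODP.NET (Oracle.DataAccess.Client); the connection string, table, and the int-per-entry data shape are placeholders:

using System.Collections.Generic;
using Oracle.DataAccess.Client; // ODP.NET

static void SaveBlocks(IEnumerable<List<int>> blocks, string connectionString)
{
    foreach (var block in blocks) // e.g. 100 entries per block
    {
        // Open/Close per block: cheap when pooling is on (the default), and no
        // single connection stays open long enough to be reclaimed or firewalled.
        using (var conn = new OracleConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                foreach (var id in block)
                {
                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        cmd.CommandText = "INSERT INTO t (id) VALUES (:id)"; // placeholder DML
                        cmd.Parameters.Add("id", id);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();
            }
        } // Dispose() returns the session to the pool rather than tearing it down
    }
}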

Related

How to check if connection pools are cleared

I want to rename the database file, and even though I wrap every connection in a using block, every time I have to call:
FirebirdSql.Data.FirebirdClient.FbConnection.ClearAllPools();
The problem is that this method doesn't block the thread, and I don't know how to check whether all connections are cleared, because if I read the value of:
FirebirdSql.Data.FirebirdClient.FbConnection.ConnectionPoolsCount
It is zero immediately after the method, but I am still not able to rename the database file. If I wait a little after the method (I tried 1 s), the file is not locked and I can rename it. The problem is that this delay will almost certainly differ between machines.
AFAIK the only other way to check whether the file is locked is to attempt the rename in a loop with some timeout, but then I cannot be sure whether the lock is held by connections from my application or from somewhere else.
So is there a better way to wait until this method has cleared the connections?
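For reference, the rename-in-a-loop workaround I mentioned would look something like this (the paths and the overall deadline are placeholders), and it still cannot tell who holds the lock:

using System;
using System.IO;
using System.Threading;

// Retry the rename until it succeeds or an overall deadline passes.
var deadline = DateTime.UtcNow + TimeSpan.FromSeconds(10);
while (true)
{
    try
    {
        File.Move(@"C:\data\app.fdb", @"C:\data\app.old.fdb");
        break; // rename succeeded - the file was no longer locked
    }
    catch (IOException)
    {
        if (DateTime.UtcNow >= deadline)
            throw; // still locked - give up and surface the error
        Thread.Sleep(100); // locked by us or by someone else - retry shortly
    }
}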
Making it an answer for the sake of formatting lists.
@Artholl, you cannot safely rely upon your own disconnection, for a bunch of reasons.
There may be other programs connected, not only your running program. And unless you connect as SYSDBA, the database creator, or with the RDB$ADMIN role, you cannot query whether there are other connections. You can, however, query MON$ATTACHMENTS for the connections made by the same user as your CURRENT_CONNECTION (a sketch of such a query follows this list). This might help you check the state of your application's own pool, though there is little practical value in it.
In Firebird 3, in SuperServer mode, there is the LINGER parameter: the server keeps the database open for some time after the last client disconnects, so that if some new client decides to connect again, the page cache for the database file is already in place. Think of moderately loaded WWW servers.
Even in Firebird 2 every open database has some caches, and how large they are is installation-specific (firebird.conf) and database-specific (gfix/gstat). After the engine sees that all clients have disconnected and decides the database is to be closed, it starts by flushing its caches and asking the OS to flush its caches too (there is no general hardware-independent way to ask RAID controllers and the disks themselves to flush their caches, or Firebird would try to do that as well). By default Firebird's caches are small and pushing them down to the hardware layer should be fast, but it is still not instant.
Even if you checked that all other clients had disconnected, then disconnected yourself, and then correctly guessed how long to wait for linger and caches, you would still not be safe: you are subject to race conditions. At the very moment you start doing something that requires exclusive ownership of the database, some new client may concurrently open a fresh connection.
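For completeness, the same-user MON$ATTACHMENTS check mentioned in the first point could look like this with the Firebird ADO.NET provider (the connection string is a placeholder):

using System;
using FirebirdSql.Data.FirebirdClient;

// Count other attachments made by the same user as our own connection.
using (var conn = new FbConnection(@"DataSource=localhost;Database=C:\data\app.fdb;User=SYSDBA;Password=masterkey"))
using (var cmd = new FbCommand(
    "SELECT COUNT(*) FROM MON$ATTACHMENTS " +
    "WHERE MON$USER = CURRENT_USER AND MON$ATTACHMENT_ID <> CURRENT_CONNECTION", conn))
{
    conn.Open();
    var others = Convert.ToInt32(cmd.ExecuteScalar());
    // others == 0 only proves there are no same-user attachments *right now*;
    // as explained above, this is inherently racy.
}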
So the correct approach is not merely to prove that there is no database connection right now, but to ensure there cannot be any new connection in the future, until you re-enable them.
So, as Mark said above, you have to use the shutdown methods to bring the database into a no-connections-allowed state, and after you are done with the file renaming and other manipulations, switch it back to normal mode.
https://www.firebirdsql.org/file/documentation/reference_manuals/user_manuals/html/gfix-dbstartstop.html
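In code, that shutdown/online round trip might look roughly like the sketch below. I'm assuming the FbConfiguration service class from FirebirdSql.Data.Services here; exact method names and overloads vary between provider versions, so verify against yours:

using FirebirdSql.Data.Services;

// Assumed API (FbConfiguration / FbShutdownMode) - check your provider version.
var cfg = new FbConfiguration();
cfg.ConnectionString = @"DataSource=localhost;Database=C:\data\app.fdb;User=SYSDBA;Password=masterkey";

cfg.DatabaseShutdown(FbShutdownMode.Forced, 0); // no new connections allowed
try
{
    // rename / copy / manipulate the database file here
}
finally
{
    cfg.DatabaseOnline(); // back to normal multi-user mode
}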
If I was responsible for maintaining the firebird provider, I wouldn't want users to rely on such functionality.
Other applications could have the file open (you're only in control of connection pools in the current AppDomain), and the server might be running some kind of maintenance on the database.
So even if you can wait for the pools to be cleared, I'd argue that if you really, really have to mess with these files, a more robust solution is to stop the Firebird service instead (and wait for it to have fully stopped), as sketched below.
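A sketch with System.ServiceProcess; the service name is installation-specific, so treat "FirebirdServerDefaultInstance" as a placeholder:

using System;
using System.ServiceProcess;

using (var svc = new ServiceController("FirebirdServerDefaultInstance"))
{
    svc.Stop();
    svc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
    // the engine has released its file handles - rename the database file here
    svc.Start();
    svc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
}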

Checking for internet connection slows the load speed when disconnected

I want to create an easy autoupdate system in my program. It works fine, but I want it to proceed only when the user is connected to the internet.
I tried many ways, and every one worked, but when I'm disconnected from the internet the time until the application loads is around 10 seconds, which is really slow. My program checks for the update on load, and so does the connection test, which I think is the problem, because if I run the test inside a button click it runs pretty fast, even when disconnected from the internet.
If you are curious, I tried to use every connection test I found, including System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable();.
Your problem is that checking for a connection has a timeout. When there's a connection, it finds that out really fast (usually) and you don't notice the delay. When you don't have a connection, it has to do more checks and wait for responses. I don't see any way to adjust the timeout, and even if you could, you'd risk not detecting connections even when they were available.
You should run the check on a separate thread so that your GUI loading isn't disrupted.
Rather than checking at startup, check on a background thread while the application is running, and update then. Any connection-checking solution can be slow even when the internet is up, if there are DNS issues or just general slowness.
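A sketch of that background check; the update host, port, and timeout are placeholders:

using System;
using System.Net.Sockets;
using System.Threading.Tasks;

// Run the potentially slow connectivity probe off the UI thread so the
// application window appears immediately, online or not.
Task.Run(async () =>
{
    bool online;
    using (var client = new TcpClient())
    {
        var connect = client.ConnectAsync("update.example.com", 443);
        // enforce our own short timeout instead of the long OS default
        online = await Task.WhenAny(connect, Task.Delay(3000)) == connect
                 && client.Connected;
    }
    if (online)
    {
        // marshal back to the UI thread (e.g. Control.Invoke) and start the update
    }
});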

Connection must be valid and open in Ddtek.Oracle.OracleConnection

Need some help from Oracle app developers out there:
I have a C# .NET 4.0 application which updates and inserts into a table using the DDTek.Oracle library. My app runs every day for about 12 hours, and this exception has occurred exactly twice, 15 days apart, and never before. On both days it had been running fine for hours (doing both inserts and updates during this period), and then this exception came. I have read that this exception can come from a bad connection string, but as I said, the app had been running fine for a while. Could this be a DB or network issue, or could it be something else?
System.InvalidOperationException: Connection must be valid and open
at DDTek.Oracle.OracleConnection.get_Session()
at DDTek.Oracle.OracleConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at DDTek.Oracle.OracleConnection.BeginTransaction()
FYI (in case it could be the cause): I have two connections on two threads. Each thread updates a different table.
PS: If anyone knows of good documentation for DDTek, please reply with a link.
From what you describe I can only speculate - there are several possibilities:
most providers offer built-in pooling; sometimes a connection in the pool becomes invalid and you get some strange behaviour (a defensive check against this is sketched after this list)
sometimes the network settings/firewalls/IDS... limit how long a TCP connection can stay open
sometimes a subtle bug (accessing same connection from 2 different threads) leads to strange problems
sometimes the DB server (or a DB firewall) limits how long a session can stay connected
sometimes memory problems cause such behaviour, almost every Oracle provider uses OCI under the hood which requires using unmanaged heap etc.
I had one provider leaking unmanaged memory (diagnosed via a memory profiler and fixed by the vendor rather quickly)
sometimes when connected to a RAC one listener/node goes down and/or some failover happens leaving current connections invalid
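Since DDTek's provider follows the standard ADO.NET model, a defensive re-check before each BeginTransaction can rule several of these out. A sketch against the generic DbConnection, assuming the provider's exceptions derive from DbException; note that State only reflects the last known state, so a cheap probe query is the stronger test:

using System.Data;
using System.Data.Common;

// Re-validate a connection just before use; rebuild the session if it died
// (network drop, firewall idle-timeout, RAC failover, ...).
static void EnsureOpen(DbConnection conn)
{
    if (conn.State == ConnectionState.Open)
    {
        try
        {
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT 1 FROM DUAL"; // cheap Oracle probe
                cmd.ExecuteScalar();
            }
            return; // session answered - safe to call BeginTransaction()
        }
        catch (DbException)
        {
            // fall through and rebuild the connection
        }
    }
    conn.Close(); // release the broken handle
    conn.Open();  // fresh session (from the pool, if pooling is enabled)
}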
As for a link to comprehensive documentation of DDTek.Oracle see here and here.

MySQL .NET Connector 5.2.3 - Win 2k3 - Random Error - Unable to connect to any hosts - Restart website and it's fixed

I have a strange error on a Win 2k3 box running the MySQL connector 5.2.3. It's happened 3 times in the last 24 hours, but only 4 times total in the last 6+ months.
[Exception: Exception of type 'System.Exception' was thrown.] MySql.Data.MySqlClient.NativeDriver.Open() +259
[MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.]
If I restart the app pool and website in IIS, the problem is resolved temporarily, but as stated, it's happened 3 times now in the last 24 hours. No server changes during that time either.
Any ideas?
Here's a guess based on limited information.
I don't think you are properly disposing of your database connections. Make sure you have the connections wrapped in using clauses. Calling .Close() isn't good enough: that closes the connection, but it doesn't dispose of it.
Second, the reason this is happening now rather than months ago is probably a combination of the amount of traffic you are seeing now versus before, and the number of database connections you are instantiating now versus before.
I just got called to help with an app that exhibited the exact same behavior. Everything was fine until one day it started crashing. A lot. They had the SQL command objects wrapped in using clauses, but not the connections. As traffic increased, the connection pool was filling up and bursting. Once we wrapped the connections in using clauses (forcing a call to Dispose()), everything was smooth sailing.
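The pattern being described, with Connector/NET (the connection string and query are placeholders):

using System;
using MySql.Data.MySqlClient;

// Both the connection and the command are disposed even if an exception is
// thrown; Dispose() hands the connection back to the pool immediately.
using (var conn = new MySqlConnection("server=dbhost;database=app;uid=user;pwd=secret"))
using (var cmd = new MySqlCommand("SELECT COUNT(*) FROM orders", conn))
{
    conn.Open();
    var count = Convert.ToInt64(cmd.ExecuteScalar());
}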
UPDATE
I just wanted to clarify one thing. SqlConnection is a wrapper around an unmanaged resource, which is why it implements IDisposable. When your connection object goes out of scope, it becomes eligible for garbage collection. From a coding perspective, you really have no idea when that might happen: it might be a few seconds, it might be considerably longer.
However, the connection pool that SqlConnection talks to is a completely separate entity. It doesn't know the connection is no longer going to be used. The Dispose() method basically tells the connection pool that this particular piece of code is done with the connection, which means the pool can immediately reallocate those resources. Note that some of the MS documentation states that Close() is equivalent to Dispose(), but my testing has shown that this simply isn't true.
If your site creates more connections than it is explicitly disposing of then the connection pool is potentially going to fill up based on when garbage collection takes place. It really depends on the number of connections created and the amount of traffic received. Higher traffic = more connections and longer periods of time between GC runs.
Now, a default IIS configuration gives you 100 executing threads per worker process, and the default ADO.NET connection pool size is also 100 connections. If you are disposing of your connections immediately after using them, you will never exceed the pool size: even if you make a lot of DB calls, the request thread makes them one at a time.
However, if you add more worker processes, increase the number of threads per process, or fail to dispose of your connections, then you run into the very real possibility of exceeding the connection pool size. Note that you can control the pool size in your SQL connection string.
Don't worry about performance of calling dispose then reopening the connection several times while processing a single page. The pool exists to keep connections alive with the server in order to make this a very fast process.
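As noted above, pool sizing is controlled from the connection string. A hypothetical Connector/NET example (keyword spellings vary slightly between providers, so check the documentation for yours):

// Pooling knobs ride along in the connection string; the values are examples only.
var connectionString =
    "server=dbhost;database=app;uid=user;pwd=secret;" +
    "Pooling=true;Min Pool Size=0;Max Pool Size=200;";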

MySql connection, can I leave it open?

Is it smart to keep the connection open throughout the entire session?
I made a C# application that connects to a MySql database, the program both reads and writes to it and the application has to be running about 10 hours a day non-stop.
Are there any risks attached to keeping the connection open, instead of calling the Close() function every time after you've fetched something from the database and opening it again when you need something new?
Leaving a connection open for a while is fine, as long as:
you don't have so many concurrently idle connections that you hit the MySQL connection limit;
you don't leave it open for hours without doing anything. The default MySQL connection wait_timeout is 8 hours; leave a connection inactive for that long and when you next come to use it you'll get a “MySQL server has gone away” error.
Since you're using ADO.NET, you can use ADO.NET's inbuilt connection pooling capabilities. Actually, let me refine that: you must always use ADO.NET's inbuilt connection pooling capabilities. By doing so you will get the .NET runtime to transparently manage your connections for you in the background. It will keep the connections open for a while even if you closed them and reuse them if you open a new connection. This is really fast stuff.
Make sure to mention in your connection string that you want pooled connections as it might not be the default behaviour.
You only need to create connections locally when you need them; since they're pooled in the background, there's no overhead in creating a new connection:
using (var connection = SomeMethodThatCreatesAConnectionObject())
{
    // do your stuff here

    connection.Close(); // this is not necessary, as Dispose() closes it anyway,
                        // but still nice to do
}
That's how you're supposed to do it in .NET.
Yes you can, provided:
You will reconnect if you lose the connection
You can reset the connection state if something strange happens
You will detect if the connection "goes quiet", for example if a firewall timeout occurs
Basically it requires a good deal of attention to failure cases and correct recovery (a sketch follows below); connecting and disconnecting often is a lot easier.
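A sketch of that recovery logic with Connector/NET, using MySqlConnection.Ping() as the liveness probe (verify it exists in your connector version):

using System.Data;
using MySql.Data.MySqlClient;

// Verify the long-lived connection before each use; rebuild it if the server
// side has gone away (wait_timeout, firewall, server restart, ...).
static MySqlConnection EnsureAlive(MySqlConnection conn, string connectionString)
{
    if (conn == null || conn.State != ConnectionState.Open || !conn.Ping())
    {
        if (conn != null) conn.Dispose();
        conn = new MySqlConnection(connectionString);
        conn.Open();
    }
    return conn;
}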
I think that if there is a connection pooling mechanism, you'd better close the connection.
One reason is that you do not need to re-check whether your connection is still alive.
If the application is using the connection, there is no reason to close it. If you don't need the connection, you should close it. If multiple applications connect to the database, you have a fixed number of connections available to that database; that's why it's better to close when you are done and reopen when you need it.
From a security point of view, I'd say it's better to close it after a query, just to be sure that no other program can inject its own statements into the opened connection.
As far as performance is concerned, it is clearly better to keep the connection open the whole time.
Your choice^^
No, I don't see any reason not to leave a connection open and re-use it: after all, this is the whole point behind the various connection-pool technologies that are around (although these are generally reserved for multi-threaded situations where workers are all operating on the same data source).
But, to expand on bobince's answer: just because you are not closing the connection, don't assume that something else won't. The connection could time out, there could be network issues, or a hundred and one other reasons why your connection dies. You need to assume that the connection may not be there and add logic to handle that exception case.
In my opinion it is not good practice to keep connections open.
Another aspect that speaks for closing connections every time is scalability. It might be fine now to leave the connection open, but what if your app is used by two or three times as many users? It's a pain in the neck to go back and change all the code (I know, I've done it :-)).
Your problem will be solved if you use connection pooling in your code. You don't need to open and close connections yourself, so you save the precious resources consumed by opening a connection: you just return the connection to a pool which, when a connection is requested, hands back an idle one.
Of course, my advice is: get a connection instance, use it, commit/rollback your work, and return it to the pool. I would not suggest keeping the connection open for that long.
One thing I didn't see in the other answers yet: if you have prepared statements or temporary tables, they can tie up server resources until the connection is closed. On the other hand, it can be useful to keep the connection around for some time instead of recreating it every few moments.
You'll pay a performance penalty if you're constantly opening and closing connections. It might be wise to use connection pooling and a short wait_timeout if you are concerned that too many running copies of your app will eat up too many database connections.
