Connection must be valid and open in DDTek.Oracle.OracleConnection - C#

Need some help from Oracle app developers out there:
I have a C# .NET 4.0 application which updates and inserts into a table using the DDTek.Oracle library. My app runs every day for about 12 hours, and this exception has come exactly twice, 15 days apart, and never before. On both days, it had been running fine for hours (doing both inserts and updates during that period), and then this exception came. I have read that this exception can come from a bad connection string, but as I said, the app had been running fine for a while. Could this be a DB or network issue, or could it be something else?
System.InvalidOperationException: Connection must be valid and open
at DDTek.Oracle.OracleConnection.get_Session()
at DDTek.Oracle.OracleConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at DDTek.Oracle.OracleConnection.BeginTransaction()
FYI (in case this could be the cause): I have two connections on two threads. Each thread updates a different table.
PS: If anyone knows of good documentation for DDTek, please reply with a link.

From what you describe I can only speculate - there are several possibilities:
most providers offer built-in pooling; sometimes a connection in the pool becomes invalid and you get some strange behaviour
sometimes the network settings/firewalls/IDS... limit how long a TCP connection can stay open
sometimes a subtle bug (accessing the same connection from two different threads) leads to strange problems (see the sketch below)
sometimes the DB server (or a DB firewall) limits how long a session can stay connected
sometimes memory problems cause such behaviour; almost every Oracle provider uses OCI under the hood, which requires using the unmanaged heap etc. - I had one provider leaking unmanaged memory (diagnosed via a memory profiler and fixed by the vendor rather fast)
sometimes, when connected to a RAC, one listener/node goes down and/or some failover happens, leaving current connections invalid
As for a link to comprehensive documentation of DDTek.Oracle see here and here.
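If the shared-connection bug is a suspect, here is a minimal defensive sketch. It assumes the DDTek provider follows the standard ADO.NET surface (State, Open(), Dispose()); the EnsureOpen helper is my own name, not a DDTek API:

using System.Data;
using DDTek.Oracle;

class ConnectionHelper
{
    // Keep one connection per thread, never shared between threads.
    // If the pool handed back a dead connection, discard it and reopen,
    // so a failure surfaces in Open() rather than later in BeginTransaction().
    public static OracleConnection EnsureOpen(OracleConnection conn, string connString)
    {
        if (conn == null || conn.State != ConnectionState.Open)
        {
            if (conn != null)
                conn.Dispose();
            conn = new OracleConnection(connString);
            conn.Open();
        }
        return conn;
    }
}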


Parallel execution of CREATE DATABASE statements results in an error, but not on a separate SQL Server instance

I am using the latest version of Entity Framework in my application (but I don't think EF is the issue here, just stating which ORM we are using) and have a multi-tenant architecture. I was doing some stress tests, built in C#, wherein the test creates an X number of tasks that run in parallel to do some stuff. At the beginning of the whole process, it creates a new database for each task (each tenant in this case) and then continues to process the bulk of the operation. But on some tasks, it throws two SQL exceptions at the exact part of my code where it tries to create a new database.
Exception #1:
Could not obtain exclusive lock on database 'model'. Retry the
operation later. CREATE DATABASE failed. Some file names listed could
not be created. Check related errors.
Exception #2:
Timeout expired. The timeout period elapsed prior to completion of
the operation or the server is not responding.
It's either of those two, and it throws on the same line of my code (where EF creates the database). Apparently SQL Server creates databases one at a time, locking the 'model' database while it does so (see here), so some of the waiting tasks throw a timeout or the lock-on-'model' error.
Those tests were done on our development SQL Server 2014 instance (12.0.4213), and if I execute, say, 100 parallel tasks, there is bound to be an error thrown on some tasks - sometimes even on nearly half the tasks I executed.
BUT here's the most disturbing part in all this: when testing on my other SQL Server instance (12.0.2000), which I have installed locally on my PC, no such error is thrown and all the tasks I execute finish completely (even 1000 tasks in parallel!).
Solutions I've tried so far but didn't work:
Changed the timeout of the Object context in EF to infinite
Tried adding a longer or infinite timeout on the connection string
Tried adding a Retry strategy on EF and made it longer and run more often
Currently, trying to install a virtual machine with an environment similar to our dev server (which uses Windows Server 2012 R2) and test on specific versions of SQL Server, to try to see if the versions have anything to do with it (yeah, I'm that desperate :))
Anyway, here is a simple C# console application you can download and try to replicate the issue. This test app will execute N-number of tasks you input and simply creates a database and does cleanup right afterwards.
Two observations:
Since the underlying issue has something to do with concurrency, and access to a "resource" which at a key point only allows a single, but not a concurrent, accessor, it's unsurprising that you might be getting differing results on two different machines when executing highly concurrent scenarios under load. Further, SQL Server Engine differences might be involved. All of this is just par for the course for trying to figure out and debug concurrency issues, especially with an engine involved that has its own very strong notions of concurrency.
Rather than going against the grain by trying to force something to work, or to fully explain a situation that is empirically not working, why not change approach and design for cleaner handling of the problem?
One option: acknowledge the reality of SQL Server's need to take an exclusive lock on the model db by regulating access via some kind of concurrency synchronization mechanism. A System.Threading.Monitor sounds about right for what is happening here, and it would allow you to control what happens when there is a timeout, with a timeout of your choosing (see the sketch below). This will help prevent the kind of locked-up scenario that may be happening on the SQL Server end, which would be an explanation for the current "timeouts" symptom (although stress load might be the sole explanation).
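A minimal sketch of that idea; the CreateTenantDatabase helper is a placeholder for whatever EF call actually creates the tenant database:

using System;
using System.Threading;

static class SerializedDatabaseCreator
{
    private static readonly object CreateGate = new object();

    public static void Create(string tenantName)
    {
        // Allow only one CREATE DATABASE in flight at a time, and fail with
        // our own timeout instead of queuing up behind SQL Server's 'model' lock.
        if (!Monitor.TryEnter(CreateGate, TimeSpan.FromMinutes(2)))
            throw new TimeoutException("Timed out waiting to create the database for " + tenantName);
        try
        {
            CreateTenantDatabase(tenantName); // e.g. context.Database.Create() in EF
        }
        finally
        {
            Monitor.Exit(CreateGate);
        }
    }

    private static void CreateTenantDatabase(string tenantName)
    {
        // Placeholder for the EF call that creates the tenant database.
    }
}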
Another option: see if you can design in such a way that you don't need to synchronize at all. Get to a point where you never request more than one database create simultaneously: some kind of queue of the create requests, where the queue is guaranteed to be serviced by, say, only one thread, with requesting tasks doing async/await patterns on the results of the creates (sketched below).
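A sketch of the queue idea; all names are illustrative, and the single consumer thread is what guarantees the creates never overlap:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public sealed class DatabaseCreateQueue
{
    private sealed class Request
    {
        public string Name;
        public TaskCompletionSource<bool> Tcs;
    }

    private readonly BlockingCollection<Request> _queue = new BlockingCollection<Request>();

    public DatabaseCreateQueue()
    {
        // Single background consumer: database creates run strictly one at a time.
        var worker = new Thread(() =>
        {
            foreach (Request req in _queue.GetConsumingEnumerable())
            {
                try
                {
                    CreateTenantDatabase(req.Name); // placeholder for the EF create call
                    req.Tcs.SetResult(true);
                }
                catch (Exception ex)
                {
                    req.Tcs.SetException(ex);
                }
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }

    // Each parallel task awaits its create request instead of racing the others.
    public Task CreateAsync(string name)
    {
        var tcs = new TaskCompletionSource<bool>();
        _queue.Add(new Request { Name = name, Tcs = tcs });
        return tcs.Task;
    }

    private static void CreateTenantDatabase(string name)
    {
        // Create the tenant database here.
    }
}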
Either way, you are going to have situations where this slows down to a crawl under stress testing, with super stressed loads causing failure. The key questions are:
Can your design handle some multiple of the likely worst case load and still show acceptable performance?
If failure does occur, is your response to the failure "controlled", in a way that you have designed for?
You probably have different LockTimeoutSeconds and QueryTimeoutSeconds values set on the development and local instances for SSDT (DacFx deploy), which is deploying the databases.
For example, LockTimeoutSeconds is used to set lock_timeout. If you have a small number here, this is the reason for
Could not obtain exclusive lock on database 'model'. Retry the operation later. CREATE DATABASE failed. Some file names listed could not be created. Check related errors.
You can use the query below to identify what timeout SSDT has set:
select session_id, lock_timeout, * from sys.dm_exec_sessions where login_name = 'username'
To increase the default timeout, find the identifier (SID) of the user that is deploying the database under
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
Then find the following registry key
HKEY_USERS\your user identifier\Microsoft\VisualStudio\your version\SQLDB\Database
and change the values for LockTimeoutSeconds and QueryTimeoutSeconds

C# Microsoft Access Connection Pooling

Intro:
Using VS 2013, .Net 4
creating a library to connect to/use a Microsoft Access database (part of a 3rd-party application - the choice of database is not an option) to be used by our parent product.
Reason to pool: connections are being made by multiple tablet PCs located throughout an industrial facility, and there are concerns regarding performance.
What do I need to add to the connection string, how do I initialize it?
When and how do I kill it?
Has anyone dealt with this before?
Why:
Answers I have found so far are vague
For a System.Data.OleDb connection you apparently don't need to do anything to enable connection pooling. According to the MSDN article OLE DB, ODBC, and Oracle Connection Pooling (ADO.NET):
Connection Pooling for OleDb
The .NET Framework Data Provider for OLE DB automatically pools connections using OLE DB session pooling.
For an application using System.Data.Odbc you need to enable connection pooling for the Access ODBC driver by double-clicking the "Microsoft Access Driver ..." name on the "Connection Pooling" tab of the ODBC Administrator control panel (odbcad32.exe) and choosing "Pool Connections to this driver"
As stated in answers and comments to similar earlier questions (like this one), it's not too clear whether connection pooling will offer a significant benefit to an application that uses an Access database, but it is supported (ref: here, item #3) and it does seem to work based on what perfmon.exe displays for the "ODBC Connection Pooling" counters.
Just to have it available somewhere to help people (and to expand upon the info provided by @Gord Thompson), and since the documentation seems to be misleading at best...
In my experience and environment, Access MDB and OLEDB + Jet 4.0 do not have connection pooling on by default; it is actually off unless you turn it on. I'm pretty sure this is true for the ACE drivers as well, but I haven't recently tested them the way I have Jet.
Unlike ODBC (as far as I know), with OLEDB you can turn connection pooling on in the connection string. Add the following command to your connection string to turn on pooling (again, this default pertains to Access and may not be true of other databases): OLE DB Services = -1;
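For example, a minimal sketch (the provider and path are illustrative):

using System.Data.OleDb;

// Jet/Access connection string with session pooling enabled via OLE DB Services.
string connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                 @"Data Source=C:\data\plant.mdb;" +
                 "OLE DB Services=-1;"; // -1 enables all services, including pooling

using (var conn = new OleDbConnection(connStr))
{
    conn.Open();
    // ... execute commands; Dispose returns the connection to the session pool
}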
Here is a link to what different values are available and what they do
https://learn.microsoft.com/en-us/cpp/data/oledb/overriding-provider-service-defaults?view=msvc-170
Backup link in case MS does what MS does and kills doc links
https://www.ibm.com/docs/en/db2/11.1?topic=tips-connection-pooling
Does connection pooling help with Access?
Short answer: yes, it definitely does. To see the difference, brutalize Access by writing a loop that opens a connection, executes a query, then closes the connection, leaving the "OLE DB Services" command out. Put a stopwatch in so you can time things, and wrap it all up in the appropriate using statements. You will see that the open/close overhead is significant, and you will also very likely see Access start choking on that overhead, occasionally throwing errors like:
The database has been placed in a state by user 'Admin' on machine
'DESKTOP-XXXXXXX' that prevents it from being opened or locked.
It will also cause other applications connected to the database to choke with the same error occasionally.
Now turn on pooling in the connection string and you will see a HUGE performance increase and the occasional errors will go away.
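A rough sketch of that stress loop (the file path, table, and iteration count are illustrative); run it once with and once without the pooling flag to compare:

using System;
using System.Data.OleDb;
using System.Diagnostics;

class PoolingBenchmark
{
    static void Main()
    {
        // Toggle "OLE DB Services=-1;" on or off to compare timings.
        const string connStr =
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\plant.mdb;OLE DB Services=-1;";

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < 1000; i++)
        {
            using (var conn = new OleDbConnection(connStr))
            using (var cmd = new OleDbCommand("SELECT COUNT(*) FROM SomeTable", conn))
            {
                conn.Open();
                cmd.ExecuteScalar();
            } // Dispose closes the connection (or returns it to the pool when pooling is on).
        }
        sw.Stop();
        Console.WriteLine("1000 open/query/close cycles: {0} ms", sw.ElapsedMilliseconds);
    }
}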
This is not without drawbacks, though. Access and OLEDB have an issue where, if one connection writes data, it may not be available to other connections for up to 5 seconds or, in some cases, even longer - due not only to a read buffer, but also to Jet's built-in asynchronous write-to-disk process that runs on another thread.
In many cases with OLEDB, when you open/close the connection and query data, data written from another connection is available much more quickly, as I believe the connection-open process starts you with a fresh read buffer. Connection pooling does exactly what you expect in this case: since the connection is already actually open, you are dealing with the existing connection's buffer instead of starting clean, and you may have to wait up to 5 seconds (or more) before data written by another connection is available.
Within the same app with pooling turned on, this may not be an issue because, in the end, the calls will likely (though not guaranteed to) be using the same connection anyway --- but if you have a different application (or the same app on a different machine) needing the data, you've got a problem.
So - depending on your environment and needs, connection pooling might be an asset or could be a liability.
I posted some additional info here on the buffer and ways to get around it
https://stackoverflow.com/a/74627552/2336839

Cannot open the shared memory region error

I have a user reporting this error when they're using my application.
The application is a .NET WinForms application running on Windows XP Embedded, using SQL Server CE 3.5 SP1 and LINQ to SQL as the ORM. The database itself is located in a subdirectory my application creates in the My Documents folder. The user account is an administrator account on the system. There are no other applications or processes connecting to the database.
For the most part, the application seems to run fine. It starts up, can load data from and save data to the database. The user is using the application to access the database maybe a couple hundred times a day. They get this error, but only intermittently. Maybe 3-4 times a day.
In the code itself, all of the calls to the database are using a Linq-To-SQL data context that's wrapped in a using clause. So in other words:
using (MyDataContext db = new MyDataContext(ConnectionString))
{
    // Selection criteria shown as an illustrative lambda.
    List<blah> someList = db.SomeTable.Where(x => x.SomeColumn == someValue).ToList();
    return someList;
}
That's what pretty much all of the calls to the database look like (with the exception that the ones that save data obviously aren't selecting and returning anything). As I mentioned before, they have no issue 99% of the time but only get the shared memory error a few times a day.
My current "fix" is on application startup I simply read all of the data out of the database (there's not a lot) and cache it in memory and converted my database calls to read from the in-memory lists. So far, this seems to have fixed the problem. For a day and a half now they've reported no problems. But this is still bugging me, because I don't know what would cause the error in the first place.
While the application is accessing the database a few hundred times a day, it's typically not in rapid-fire succession. It's usually once every few minutes at the least. However, there is one use-case where there might be two calls one right after the other, as fast as possible. In other words, something like:
// user makes a selection on the screen
DatabaseCall1();
DatabaseCall2();
Both of those would follow the pattern in the code block above, where they create a new context, do work, and then return. But these calls aren't asynchronous, so I would expect the connection to be closed and disposed of before DatabaseCall2 is invoked. However, could it be that something on the SQL Server CE end isn't closing the connection fast enough? That might explain why it's intermittent, since maybe most of the time it doesn't have a problem. I should also mention that this exact program, without the fix, is installed on a few other systems with the exact same hardware and software (they're clones of each other), and users of the other systems have not reported any errors.
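If that theory were right, one way to probe it would be a short retry around each data-context call; this is purely a sketch and the helper name is mine:

using System;
using System.Threading;

static class Db
{
    // Retry a database call a few times before giving up, to see whether
    // the intermittent shared-memory error is transient.
    public static T WithRetry<T>(Func<T> dbCall, int attempts)
    {
        for (int i = 0; ; i++)
        {
            try
            {
                return dbCall();
            }
            catch (Exception)
            {
                if (i >= attempts - 1)
                    throw;          // out of retries; rethrow the original error
                Thread.Sleep(100);  // brief pause before trying again
            }
        }
    }
}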
I'm stuck scratching my head because I can't reproduce this error on my development machine or a test machine, and answers to questions about this exception here and in other places typically revolve around insufficient user permissions or the database being on a shared network folder.
Check this previous post - I think you will find your answer:
SQL Server CE - Internal error: Cannot open the shared memory region

OracleConnection close issue

I have an application that inserts and updates about 10,000 entries into several tables in an Oracle database using ODP.NET. I've separated the job into blocks of 100 entries.
At first the application was opening and closing the OracleConnection for each entry. The application would run fine for some blocks of entries, but after a while (not always the same amount of time) it would just stop running - still using memory, but no CPU, and with no error thrown. I found out this happened when the application called the OracleConnection Close method.
I have changed it to open and close the connection once, at the beginning and the end of the application, and everything is fine.
Even though opening and closing the connection for each entry wasn't the proper way to do it, my question is: why did it just stop at the Close() method of the OracleConnection?
Does anyone have an idea?
Thanks in advance.
I can suggest two reasons, both of which I've seen before.
First, if you have a long-running connection affecting a lot of records, it's possible that, because of the elapsed time (or perhaps because something is blocking the insert/update), the connection pool manager is attempting to reclaim and recycle the connection.
Another one, which is very difficult to debug, is the possibility that your connections are going through a firewall, and the firewall is dropping long-running connections. If this is the case, you might experience the occasional problem when opening a new connection from the pool - it should be usable, but it fails when you try to open it (I forget the exact symptoms and error messages, as this was a few years back).
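If the firewall theory fits, ODP.NET's pooling attributes can mitigate it; a sketch with illustrative values:

// Connection string with pool validation turned on, so a dead pooled
// connection is detected and replaced instead of being handed to the app.
const string connStr =
    "User Id=scott;Password=tiger;Data Source=MyOracleDb;" +
    "Pooling=true;Min Pool Size=1;Max Pool Size=10;" +
    "Validate Connection=true;"; // costs an extra round-trip per checkout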

MySQL .NET Connector 5.2.3 - Win 2k3 - Random Error - Unable to connect to any hosts - Restart website and it's fixed

I have a strange error on a Win 2k3 box running the MySQL Connector 5.2.3. It's happened 3 times in the last 24 hours, but only 4 times total in the last 6+ months.
[Exception: Exception of type 'System.Exception' was thrown.] MySql.Data.MySqlClient.NativeDriver.Open() +259
[MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.]
If I restart the app pool and website in IIS, the problem is resolved temporarily, but as stated, it's happened 3 times now in the last 24 hours. No server changes during that time either.
Any ideas?
Here's a guess based on limited information.
I don't think you are properly disposing of your database connections. Make sure you have the connections wrapped in using clauses. Calling .Close() isn't good enough: that closes the connection, but it doesn't dispose of it.
Second, the reason why this is happening now instead of months ago is probably a combination of the amount of traffic you are now seeing versus before and the number of database connections you are instantiating now versus before.
I just got called in to help with an app that exhibited the exact same behavior. Everything was fine until one day it started crashing. A lot. They had the SQL command objects wrapped in using clauses, but not the connections. As traffic increased, the connection pool was filling up and bursting. Once we wrapped the connections in using clauses (forcing a call to Dispose()), everything was smooth sailing.
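A minimal sketch of that fix (the query and table name are illustrative):

using System;
using MySql.Data.MySqlClient;

static class UserRepository
{
    public static int GetUserCount(string connStr)
    {
        // Both the connection and the command are disposed deterministically,
        // returning the connection to the pool right away instead of waiting
        // for garbage collection.
        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand("SELECT COUNT(*) FROM users", conn))
        {
            conn.Open();
            return Convert.ToInt32(cmd.ExecuteScalar());
        }
    }
}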
UPDATE
I just wanted to clarify one thing. SqlConnection is a wrapper around an unmanaged resource, which is why it implements IDisposable. When your connection object goes out of scope, it becomes available to be garbage collected. From a coding perspective, you really have no idea when this might happen. It might be a few seconds; it might be considerably longer.
However the connection pool, which SqlConnection talks to, is a completely separate entity. It doesn't know the connection is no longer going to be used. The Dispose() method basically tells the connection pool that this particular piece of code is done with the connection. Which means that the connection pool can immediately reallocate those resources. Note that some of the MS documentation states that Close() is equivalent to Dispose() but my testing has shown that this simply isn't true.
If your site creates more connections than it is explicitly disposing of then the connection pool is potentially going to fill up based on when garbage collection takes place. It really depends on the number of connections created and the amount of traffic received. Higher traffic = more connections and longer periods of time between GC runs.
Now, a default IIS configuration gives you 100 executing threads per worker process, and the default configuration of the connection pool is 100 connections. If you are disposing of your connections immediately after using them, then you will never exceed the pool size: even if you make a lot of db calls, the request thread is doing them one at a time.
However, if you add more worker processes, increase the number of threads per process, or fail to dispose of your connections, then you run into the very real possibility of exceeding the connection pool size. Note that you can control the pool size in your SQL connection string (example below).
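For instance (the exact keyword can vary between providers; SqlClient uses "Max Pool Size", while MySQL Connector/NET accepts "Maximum Pool Size"):

// Illustrative connection string raising the pool ceiling.
const string connStr =
    "Server=dbhost;Database=mydb;Uid=appuser;Pwd=secret;" +
    "Pooling=true;Maximum Pool Size=200;";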
Don't worry about performance of calling dispose then reopening the connection several times while processing a single page. The pool exists to keep connections alive with the server in order to make this a very fast process.
