Proper way to deal with database connectivity issue - c#

I am getting the error below when trying to connect to the database:
A network-related or instance-specific error occurred while
establishing a connection to SQL Server. The server was not found or
was not accessible. Verify that the instance name is correct and that
SQL Server is configured to allow remote connections. (provider: Named
Pipes Provider, error: 40 - Could not open a connection to SQL Server)
The error is intermittent: sometimes I get it and sometimes I don't. For example, when I run my program the first time it opens the connection successfully; when I run it a second time I get this error; and when I run it again after that there is no error.
When I connect to the same database server through SSMS I can connect successfully, but I get this network issue in my program only.
The database is not local; it is on Azure.
I don't get this error with my local database.
Code:
public class AddOperation
{
    public void Start()
    {
        using (var processor = new MyProcessor())
        {
            for (int i = 0; i < 2; i++)
            {
                if (i == 0)
                {
                    var connection = new SqlConnection("Connection string 1");
                    processor.Process(connection);
                }
                else
                {
                    var connection = new SqlConnection("Connection string 2");
                    processor.Process(connection);
                }
            }
        }
    }
}
public class MyProcessor : IDisposable
{
    public void Process(DbConnection cn)
    {
        using (var cmd = cn.CreateCommand())
        {
            cmd.CommandText = "query";
            cmd.CommandTimeout = 1800;
            cn.Open(); // Sometimes works, sometimes doesn't
            using (var reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
            {
                // code
            }
        }
    }
}
So I am confused about two things:
1) ConnectionTimeout: Should I increase the connection timeout, and will this solve my intermittent connection problem?
2) Retry attempt policy: Should I implement a connection retry mechanism like the one below?
public static void OpenConnection(DbConnection cn, int maxAttempts = 1)
{
    int attempts = 0;
    while (true)
    {
        try
        {
            cn.Open();
            return;
        }
        catch
        {
            attempts++;
            if (attempts >= maxAttempts) throw;
        }
    }
}
I am confused between these two options.
Can anybody please suggest the better way to deal with this problem?

Use a new version of .NET (4.6.1 or later) and then take advantage of the built-in resiliency features:
ConnectRetryCount, ConnectRetryInterval and Connection Timeout.
See the documentation for more info: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-connectivity-issues#net-sqlconnection-parameters-for-connection-retry

All applications that communicate with a remote service are sensitive to transient faults.
As mentioned in other answers, if your client program connects to SQL Database by using the .NET Framework class System.Data.SqlClient.SqlConnection, use .NET 4.6.1 or later (or .NET Core) so that you can use its connection retry feature.
When you build the connection string for your SqlConnection object, coordinate the values among the following parameters:
ConnectRetryCount:  Default is 1. Range is 0 through 255.
ConnectRetryInterval:  Default is 10 seconds. Range is 1 through 60.
Connection Timeout:  Default is 15 seconds. Range is 0 through 2147483647.
Specifically, your chosen values should make the following equality true:
Connection Timeout = ConnectRetryCount * ConnectRetryInterval
For example, with ConnectRetryCount = 3 and ConnectRetryInterval = 10, Connection Timeout should be 30 seconds.
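A minimal sketch of building such a connection string in C# (the server name, database and credentials are placeholders):
// Hypothetical sketch: 3 retries x 10 s interval = 30 s connection timeout.
var builder = new SqlConnectionStringBuilder
{
    DataSource = "tcp:yourserver.database.windows.net,1433", // placeholder server
    InitialCatalog = "yourdatabase",                         // placeholder database
    UserID = "youruser",
    Password = "yourpassword",
    ConnectRetryCount = 3,
    ConnectRetryInterval = 10, // seconds
    ConnectTimeout = 30        // seconds; 3 * 10 = 30
};
using (var connection = new SqlConnection(builder.ConnectionString))
{
    connection.Open(); // transient failures are retried by the provider itself
}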
Now, coming to option 2: when your app has its own custom retry logic, it multiplies the total number of attempts - for each custom retry the provider will itself try ConnectRetryCount times. For example, if ConnectRetryCount = 3 and the custom retry count = 5, it will make 15 attempts. You may not need that many retries.
If you only weigh custom retries against Connection Timeout:
connection timeouts usually occur on lossy networks - networks with higher packet loss (e.g. cellular or weak Wi-Fi) or heavy traffic load. It's up to you to choose the best strategy among them.
The guidelines below are helpful when troubleshooting transient errors:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-connectivity-issues
https://learn.microsoft.com/en-in/azure/architecture/best-practices/transient-faults

As you can read here, retry logic is recommended even for SQL Server installed on an Azure VM (IaaS).
FAULT HANDLING: Does your application code include retry logic and transient fault handling? Including proper retry logic and transient fault handling remediation in the code should be a universal best practice, both on-premises and in the cloud, whether IaaS or PaaS. If this characteristic is missing, application problems may arise on both Azure SQL DB and SQL Server in an Azure VM, but in this scenario the latter is recommended over the former.
An incremental retry logic is recommended.
There are two basic approaches to instantiating the objects from the application block that your application requires. In the first approach, you can explicitly instantiate all the objects in code, as shown in the following code snippet:
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));
var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(retryStrategy);
In the second approach, you can instantiate and configure the objects from configuration data as shown in the following code snippet:
// Load policies from the configuration file.
// SystemConfigurationSource is defined in Microsoft.Practices.EnterpriseLibrary.Common.
using (var config = new SystemConfigurationSource())
{
    var settings = RetryPolicyConfigurationSettings.GetRetryPolicySettings(config);

    // Initialize the RetryPolicyFactory with a RetryManager built from the
    // settings in the configuration file.
    RetryPolicyFactory.SetRetryManager(settings.BuildRetryManager());

    var retryPolicy = RetryPolicyFactory.GetRetryPolicy
        <SqlDatabaseTransientErrorDetectionStrategy>("Incremental Retry Strategy");

    // ...
    // Use the policy to handle the retries of an operation.
}
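Either way, once you have a retry policy you execute the database work through it. A minimal sketch, assuming a placeholder connection string and query:
// Hypothetical usage of the policy obtained above; the connection string
// and query text are placeholders.
retryPolicy.ExecuteAction(() =>
{
    using (var connection = new SqlConnection("connection string"))
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "query";
        connection.Open();
        command.ExecuteNonQuery();
    }
});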
For more information, please visit this documentation.

Consider using Polly.
You could use a simple piece of code like this:
RetryPolicy retryPolicy = Policy.Handle<Exception>()
    .WaitAndRetry(3, retryAttempt => TimeSpan.FromSeconds(retryAttempt));

var result = retryPolicy.Execute(() => someClass.DoSomething());
This will retry the request up to three times.
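Applied to the original problem, you could wrap the connection open itself. A sketch (handling SqlException rather than all exceptions, which is usually preferable so that only database errors are retried):
// Hypothetical sketch: retry cn.Open() up to three times with a growing delay.
RetryPolicy openPolicy = Policy.Handle<SqlException>()
    .WaitAndRetry(3, retryAttempt => TimeSpan.FromSeconds(retryAttempt));

openPolicy.Execute(() => cn.Open());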

It is entirely possible for a connection to drop - see the "Fallacies of Distributed Computing" :).
It could be a network connectivity issue, at either end.
I would recommend (assuming the firewall is enabled for your machine on Azure):
Ping the server and see if there is any loss: ping (server).database.windows.net
Run tracert to see where along the route packets are being dropped.
telnet to the server's port can also be your friend.
The above three should help you pinpoint where the problem is.
I think your retry logic is fine.
Regarding your questions:
Increase Timeout
Only if you are sure that your query will take a long time. If you have to increase the timeout for a simple insert, the problem is likely network connectivity.
Retry Logic
As already posted, it's now part of the framework, which you can utilise, or the one you created should be fine. Ideally it's good to have retry logic even if you are sure about connectivity and speed. Just in case :)

You should increase the timeout because establishing a connection to SQL Server involves several steps, so it takes some time when the connection is established for the first time. After establishment, the connection is pooled in memory for reuse by subsequent queries.
Please refer to the link below for a more detailed understanding of connection pooling:
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling
As you mentioned, this error occurs sometimes and not always, so network and connectivity factors are likely involved. The default timeout for a SQL connection is 15 seconds; I think that if you change it to 30 seconds, it should work.
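For example, a sketch of such a connection string (server, database and credentials are placeholders):
// Hypothetical connection string; only Connection Timeout is the point here.
var connectionString = "Data Source=yourserver.database.windows.net;Initial Catalog=yourdb;User ID=youruser;Password=yourpassword;Connection Timeout=30";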

Are you using SQL Express or Workgroup Edition? If so, it's possible that the server is too busy to respond.
To rule out network problems, from a command prompt, do a PING -t SqlServername. Does every ping come back, or are some lost? This can be an indicator of network interruptions that might also cause this error, like a faulty switch. If they are all lost then (given that your database connection sometimes works) it is likely that ping is being blocked by a firewall somewhere: it may help diagnosis if you find that block and temporarily unblock it.
The error message indicates that you are using Named pipes. Are you using Named pipes on purpose? For most scenarios (including Azure database) I'd suggest enabling TCP/IP and disabling Named Pipes, in SQL Server Configuration Manager.
Depending on how 'far away' your Azure database is, the delays introduced by routers and firewalls can sometimes upset Kerberos and/or related timings. You can overcome this by specifying the port in the connection string, which avoids the round trip to port 1434 to enumerate the instance. I assume you're already using a FQDN. For example: server\instance,port
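For example, a sketch of an Azure-style connection string that forces TCP and names the port explicitly (server, database and credentials are placeholders):
// Hypothetical: the tcp: prefix plus an explicit port avoids instance
// enumeration via the SQL Browser service on UDP 1434.
var connectionString = "Server=tcp:yourserver.database.windows.net,1433;Initial Catalog=yourdb;User ID=youruser;Password=yourpassword;";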


Solving a timeout error for SQL query

I am getting this error:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I know there are already guides out there to help solve this, but they are not working for me. What am I missing, and where should I add code to these SQL statements in my C# program?
String sql = project1.Properties.Resources.myQueryData;
SqlDataAdapter sqlClearQuestDefects = new SqlDataAdapter(sql,
    "Data Source=ab;Initial Catalog=ac;User ID=ad;Password=aa");
DataSet lPlanViewData = new DataSet();
sqlClearQuestDefects.Fill(lPlanViewData, "PlanViewData");
I am getting the timeout error at this line:
SqlDataAdapter sqlClearQuestDefects = new SqlDataAdapter(sql,
    "Data Source=ab;Initial Catalog=ac;User ID=ad;Password=aa");
One suggestion is to remove the command timeout limit (0 means no limit). Note that a newly constructed SqlDataAdapter has a null SelectCommand, so set the timeout on the adapter created above:
sqlClearQuestDefects.SelectCommand.CommandTimeout = 0; // Set the timeout on the command object; 0 = no limit
You're trying to connect to a SQL Server, and it is taking longer than ADO.NET is willing to wait.
Try connecting to the same server, using the same username and password, using SQL Server Management Studio. If you get the same error, there is either something wrong with your connection string, the server you specify is not running, or you can't get to the server across the network from where you are (maybe you're on a public IP address trying to get in to an internal server name). I can't think of a scenario in which you'd enter the exact same server and credentials into SSMS and connect, then do the same in ADO.NET and fail.
If you're on a slow network, you can try increasing the timeout value. However, if a connection is going to happen at all, it should happen pretty quickly.
Take a look at both your SQL Native Client settings, and the SQL Server settings on the server. There is a section for allowed protocols; SQL can connect using a variety of protocols. Usually, you want TCP/IP for a server on the network, and Named Pipes for a server running on your own computer.
EDIT FROM YOUR COMMENT: Oh, that's normal; happens all the time. From time to time on a TCP network, packets "collide", or are "lost" in transmission. It's a known weakness of packet-switching technologies, which is managed by the TCP protocol itself in most cases. One case in which it isn't easily detected is when the initial request for a connection is lost in the shuffle. In that case, the server doesn't know there was a request, and the client didn't know their request wasn't received. So, all the client can do is give up.
To make your program more robust, all you have to do is expect a failure or two, and simply re-try your request. Here's a basic algorithm to do that:
SqlDataAdapter sqlClearQuestDefects;
short retries = 0;
while (true)
{
    try
    {
        sqlClearQuestDefects = new SqlDataAdapter(sql,
            "Data Source=ab;Initial Catalog=ac;User ID=ad;Password=aa");
        break;
    }
    catch (Exception)
    {
        retries++;
        // will try a total of three times before giving up
        if (retries > 2) throw;
    }
}
Since the exact setting to increase the connection timeout wasn't mentioned in the other answers (as of yet): if you do determine a need to increase your connection timeout, you do so in your connection string as follows:
Data Source=ab;Initial Catalog=ac;User ID=ad;Password=aa;Connection Timeout=120
where 120 means 120 seconds. The default is 15 seconds.
This is probably a connection issue with your database, for example if you had the following connection string:
"Data Source=MyDatabaseServer...
Then you need to make sure that:
The machine MyDatabaseServer is connected to the network and is accessible from the machine you are running your application from (under the name "MyDatabaseServer")
The database server is running on MyDatabaseServer
The database server on MyDatabaseServer is configured to accept connections from remote machines
The firewall settings both on the local machine and MyDatabaseServer are correctly set up to allow SQL Server connections through
Your username / password etc... are correct
You can also try connecting to the given database instance using SQL Server Management Studio from the client machine as a diagnosis step.
There are plenty of articles that address SQL Server connectivity issues - do a Google search for the specific error message that comes up or, failing that, ask a specific question on Server Fault.
I faced this problem recently and found the resolution that worked for me.
By the way, setting Timeout = 0 helped avoid the exception, but the execution time became unreasonable, while manual execution of the stored procedure took only a few seconds.
Bottom line:
I added SET IMPLICIT_TRANSACTIONS OFF to the stored procedure that is used to fill the data set.
From MSDN:
The SQL Server Native Client OLE DB Provider for SQL Server and the SQL Server Native Client ODBC driver automatically set IMPLICIT_TRANSACTIONS to OFF when connecting. SET IMPLICIT_TRANSACTIONS defaults to OFF for connections with the SQLClient managed provider, and for SOAP requests received through HTTP endpoints.
[...]
When SET ANSI_DEFAULTS is ON, SET IMPLICIT_TRANSACTIONS is ON.
So I believe that in my case the defaults weren't as required. (I couldn't verify that; I don't have enough privileges on the SQL Server.) But adding this line to my SP solved the problem.
IMPORTANT: In my case I didn't need the transaction, so I had no problem cancelling the implicit transaction setting. If a transaction is a must in your case, you probably shouldn't use this solution.

SQL server recovery mechanism

I have a C# service application which queries data from one database and inserts it into another SQL database.
Sometimes the MSSQLSERVER service crashes for an unknown reason, and my application crashes as well. I want to build a SQL recovery mechanism where I check that the SqlConnection state is fine before I write to the database, but how do I do that?
I tried stopping the MSSQLSERVER service, and sqlconnection.State is always Open even when the MSSQLSERVER service is stopped.
First: Fix your real problem. SQL Server should be very, very stable.
Second: consider using MSMQ (or SQL Service Broker) on both the client application and server to queue updates.
The general strategy of checking the connection state before calling a SQL command fundamentally won't work. What happens if the service crashes after your connection check, but before you call the SQL command?
You probably will need to figure out what exception is thrown when the database is down and recover from that exception at the appropriate layer of code.
I think that the approach you chose is not very good.
If your application is some kind of scheduled job, let it crash. No database, no work can be done; it is OK to crash in this case. The next time it runs and the DB is up, it will do its thing. You can also implement retries.
If your application is a Windows service with some kind of internal timer, just make sure your service doesn't crash by handling SqlException, and retry until the server is up.
Also, you might want to use distributed transactions to guarantee the integrity of the copy procedure; whether you need that depends on the requirements, as sketched below.
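If the copy does need to be atomic across the two databases, a minimal TransactionScope sketch (both connection strings are placeholders; opening two connections inside one scope escalates to a distributed transaction, so MSDTC must be available):
// Hypothetical sketch; requires a reference to System.Transactions.
using (var scope = new TransactionScope())
{
    using (var source = new SqlConnection("source connection string"))
    using (var target = new SqlConnection("target connection string"))
    {
        source.Open();
        target.Open();
        // read from source, insert into target
    }
    scope.Complete(); // commit only if everything above succeeded
}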
[Edit] In response to the retry question:
var attemptNumber = 0;
while (true)
{
    try
    {
        using (var connection = new SqlConnection())
        {
            connection.Open();
            // do the job
        }
        break;
    }
    catch (SqlException exception)
    {
        // log exception
        attemptNumber++;
        if (attemptNumber > 3)
            throw; // let it crash
    }
}

SQL Exception: "Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it

When opening a connection to SQL Server 2005 from our web app, we occasionally see this error:
"Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it.
We use MARS and connection pooling.
The exception originates from the following piece of code:
protected SqlConnection Open()
{
    SqlConnection connection = new SqlConnection();
    connection.ConnectionString = m_ConnectionString;
    if (connection != null)
    {
        try
        {
            connection.Open();
            if (m_ExecuteAsUserName != null)
            {
                string sql = Format("EXECUTE AS LOGIN = {0};", m_ExecuteAsUserName);
                ExecuteCommand(connection, sql);
            }
        }
        catch (Exception exception)
        {
            connection.Close();
            connection = null;
        }
    }
    return connection;
}
I found an MS Connect article which suggests that the error is caused when a previous command has not yet terminated before the EXECUTE AS LOGIN command is sent. Yet how can this be if the connection has only just been opened?
Could this be something to do with connection pooling interacting strangely with MARS?
UPDATE: For the short term we have implemented a workaround: clearing out the connection pool whenever this happens, to get rid of the bad connection, as it otherwise keeps getting handed back to various users. (This now happens 5-10 times a day with only a small number of simultaneous users, so it is fairly annoying.) But if anyone has any further ideas, we are still looking for a real solution...
I would say it's MARS rather than pooling.
From "Using Multiple Active Result Sets (MARS)"
Applications can have multiple default
result sets open and can interleave
reading from them.
Applications can
execute other statements (for example,
INSERT, UPDATE, DELETE, and stored
procedure calls) while default result
sets are open.
Connection pooling in its basic form means the connection open/close overhead is minimised, but any connection (until MARS) has one thing going on at any one time. Pooling has been around for some time and just works out of the box.
MARS (I've not used it, BTW) introduces overlapping "stuff" going on for any single connection. So MARS, rather than connection pooling, is probably the bigger culprit of the two.
From "Extending Database Impersonation by Using EXECUTE AS"
When impersonating a principal by
using the EXECUTE AS LOGIN statement,
or within a server-scoped module by
using the EXECUTE AS clause, the scope
of the impersonation is server-wide.
This may explain why MARS is causing it: the same principal in two sessions, both running EXECUTE AS.
There may be something in that article of use, or try this:
IF ORIGINAL_LOGIN() = SUSER_SNAME() EXECUTE AS LOGIN = {0};
On reflection, and after reading for this answer, I'm not convinced that trying to change the execution context for each session (MARS) on one connection is a good idea...
Don't blame connection pooling - MARS is quite notorious for wreaking havoc. It's not entirely to blame, but it's kind of half and half. The key thing to remember is that MARS is designed for, and only works with, "normal" DB use (meaning regular CRUD stuff, not admin batches). Any command that has a wide effect on the DB engine can trip MARS up, even on a single connection in a single thread (like running a setup batch to create tables, or a nested transaction).
Having said that, one can easily just blame MARS, but it works perfectly fine for normal CRUD scenarios, which are like 99% (and low-efficiency things like ORMs and LINQ depend on it for life). Meaning it's important to learn that if you want to hack SQL through a connection, you can't use MARS. For example, I had setup code that created the whole DB from scratch, because that's very convenient for deployment, but it shared a connection string with the web service it was deploying - oops :-) It took me a few days of digging to learn my lesson. So I now maintain the separation of concerns (which is always good) and the problems went away.
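If you suspect MARS in a setup like that, one cheap experiment is to turn it off for the administrative connection only and see whether the error disappears. A sketch (server and database are placeholders):
// Hypothetical: a separate connection string for admin/setup batches with MARS off.
var adminConnectionString = "Data Source=yourserver;Initial Catalog=yourdb;Integrated Security=True;MultipleActiveResultSets=False";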
Have you tried using REVERT at the end of your SQL statement?
http://msdn.microsoft.com/en-us/library/ms178632.aspx
I always do this just to make sure the current context is back to normal.
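In terms of the Open() method from the question, that would mean pairing every EXECUTE AS LOGIN with a REVERT before the connection goes back to the pool. A sketch reusing the question's own m_ExecuteAsUserName field and ExecuteCommand helper (the method itself is hypothetical):
// Hypothetical counterpart to the question's Open(): undo the impersonation
// before the pooled connection can be handed to another user.
protected void CloseImpersonated(SqlConnection connection)
{
    if (m_ExecuteAsUserName != null)
    {
        ExecuteCommand(connection, "REVERT;");
    }
    connection.Close();
}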

ASP.NET SqlConnection Timeout issue

I have run into a frustrating issue which I originally thought was a connection leak, but that does not seem to be the case. The scenario is this: the data access for this application uses the Enterprise Library (v4) from Microsoft. All data access calls are wrapped in using statements such as
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    // ...more code
}
Now the index page of this application makes 8 calls to the database to load everything, and I can bring the application to its knees by refreshing the index about 15 times. It seems that I receive this error once the database reaches 113 connections. Here is what makes this weird:
I have run similar code with the EntLib on high-traffic sites and have NEVER had this problem.
If I kill all the connections to the database and get the production application back up and running, then every time I refresh the application I can run this SQL:
SELECT DB_NAME(dbid) AS 'Database Name',
       COUNT(dbid) AS 'Total Connections'
FROM sys.sysprocesses WITH (NOLOCK)
WHERE dbid > 0
GROUP BY dbid
I can see the number of connections actively increasing with each page refresh. Running the same code on my local box with the same connection string does not cause this problem. Further, if the production website is down I can fire up the site via Visual Studio and run it fine; the only difference between the two is that the production site has Windows authentication turned on and my local copy doesn't. Turning Windows authentication off seems to have no effect on the server.
I have absolutely no clue what is causing this or why the connections are not being disposed of in SQL Server. The EntLib objects do not expose .Close() methods for anything, so I can't explicitly close the connections.
Any thoughts?
Thanks!
Edit
Wow, I just noticed that I never actually posted the error message. Oy. The actual connection error is: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Check that the stored procedure you are executing is not running into a row or table lock. Also, if you possibly can, try to deploy to another server and check whether the application crawls again.
Also try increasing the maximum allowed connections for your SQL Server.
I think the "Timeout Expired" error is a general issue and may have several causes. Increasing the timeout can solve some of them, but not all.
You may also refer to the following link to troubleshoot and fix the error:
http://techielion.blogspot.com/2007/01/error-timeout-expired-timeout-period.html
Could it be a configuration issue on the server?
How do you make a connection to the database on the production server?
That might be an area worth looking into.
While I don't know the answer, I can suggest that for some reason connections are not being closed by your application when run in production. (Stating the obvious.)
You might want to examine your network configuration between the web server and the SQL server. High-latency networks can cause connections not being closed in time.
Also, it might help to look at the performance counters listed at the end of the following MSDN article:
http://msdn.microsoft.com/en-us/library/8xx3tyca%28VS.71%29.aspx
Finally, if nothing else helps, I'd get a debugger and the Enterprise Library source code onto production and debug your code inside the Enterprise Library to find out why connections are not being closed.
Silly question: are you properly closing your DataReader? If not, this could be the problem, and the difference in behaviour between dev and prod could be caused by different garbage collection patterns.
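A reader that is never disposed pins its connection until the garbage collector gets around to it, which would explain exactly this dev/prod difference. Wrapping both the command and the reader in using blocks releases the connection deterministically. A sketch in the question's own EntLib style (the sproc name and parameter are taken from the question):
// Hypothetical sketch: dispose the reader and command deterministically.
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    using (IDataReader reader = db.ExecuteReader(dbCommand))
    {
        while (reader.Read())
        {
            // consume the rows here
        }
    } // reader disposed here, releasing the underlying connection
}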
I would disable connection pooling and see whether that suppresses the error (heh). Just add ";Pooling=false" to your connection string.
Or, perhaps you could add something like the following 'cleanup' code to your page (which closes any connection left open when the page unloads) - right in the 'using' clause:
System.Web.UI.Page page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null)
{
    page.Unload += (EventHandler)delegate(object s, EventArgs e)
    {
        try
        {
            dbCommand.Connection.Close();
        }
        catch (Exception)
        {
        }
        finally
        {
            result = null;
        }
    };
}
Also, make sure you've enabled the 'shared memory' protocol if your SQL Server and IIS are on the same machine (a real performance booster)!

TIBCO EMS Failover reconnect for C# (TIBCO.EMS.dll)

We have a TIBCO EMS solution that uses built-in server failover in a 2-4 server environment. If the TIBCO admins fail over services from one EMS server to another, connections are supposed to be transferred to the new server automatically at the EMS service level. For our C# applications using the EMS service this is not happening - our user connections are not being transferred to the new server after failover, and we're not sure why.
Our application connects to EMS at startup only, so if the TIBCO admins fail over after users have started our application, users need to restart the app in order to connect to the new server (our EMS connection uses a server string including all 4 production EMS servers - if the first attempt fails, it moves to the next server in the string and tries again).
I'm looking for an automated approach that will attempt to reconnect to EMS periodically if it detects that the connection is dead, but I'm not sure how best to do that.
Any ideas? We are using TIBCO.EMS.dll version 4.4.2 and .NET 2.x (SmartClient app).
Any help would be appreciated.
First off, yes, I am answering my own question. It's important to note, however, that without ajmastrean I would be nowhere. Thank you so much!
ONE:
ConnectionFactory.SetReconnAttemptCount, SetReconnAttemptDelay and SetReconnAttemptTimeout should be set appropriately. I think the default values retry too quickly (on the order of 1/2 second between retries). Our EMS servers can take a long time to fail over because of network storage, etc., so 5 retries at 1/2-second intervals is nowhere near long enough.
TWO:
I believe it's important to enable the client-server and server-client heartbeats. I wasn't able to verify this, but without them the client might not get the notification that the server is offline or switching into failover mode. This, of course, is a server-side setting for EMS.
THREE:
You can watch for the failover event by setting Tibems.SetExceptionOnFTSwitch(true) and then wiring up an exception event handler. In a single-server environment you will see a "Connection has been terminated" message. However, if you are in a fault-tolerant multi-server environment, you will see this: "Connection has performed fault-tolerant switch to <server url>". You don't strictly need this notification, but it can be useful (especially in testing).
FOUR:
Apparently not clear in the EMS documentation: connection reconnect will NOT work in a single-server environment; you need to be in a multi-server, fault-tolerant environment. There is a trick, however: you can put the same server in the connection list twice. Strange, I know, but it works, and it enables the built-in reconnect logic to work.
some code:
private void initEMS()
{
    Tibems.SetExceptionOnFTSwitch(true);

    _ConnectionFactory = new TIBCO.EMS.TopicConnectionFactory(<server>);
    _ConnectionFactory.SetReconnAttemptCount(30);      // 30 retries
    _ConnectionFactory.SetReconnAttemptDelay(120000);  // 2 minutes
    _ConnectionFactory.SetReconnAttemptTimeout(2000);  // 2 seconds

    _Connection = _ConnectionFactory.CreateTopicConnectionM(<username>, <password>);
    _Connection.ExceptionHandler += new EMSExceptionHandler(_Connection_ExceptionHandler);
}

private void _Connection_ExceptionHandler(object sender, EMSExceptionEventArgs args)
{
    EMSException e = args.Exception;
    // args.Exception = "Connection has been terminated" -- single-server failure
    // args.Exception = "Connection has performed fault-tolerant switch to <server url>" -- fault-tolerant multi-server
    MessageBox.Show(e.ToString());
}
This post should sum up my current comments and explain my approach in more detail...
The TIBCO 'ConnectionFactory' and 'Connection' types are heavyweight, thread-safe types. TIBCO suggests that you maintain the use of one ConnectionFactory (per server-configured factory) and one Connection per factory.
The server also appears to be responsible for in-place 'Connection' failover and re-connection, so let's confirm it's doing its job and then lean on that feature.
Creating a client-side solution is going to be slightly more involved than fixing a server or client setup problem. All sessions you have created from a failed connection need to be re-created (not to mention producers, consumers, and destinations). There are no "reconnect" or "refresh" methods on either type, and the sessions do not maintain a reference to their parent connection.
You would have to manage a lookup of connection/session objects and go nuts re-initializing everything! Or implement some sort of session-failure event handler that can get the new connection and reconnect them.
So, for now, let's dig in and see if the client is set up to receive failover notification (TIBCO EMS user's guide, pg 292), and make sure the raised exception is caught, contains the failover URL, and is being handled properly.
Client applications may receive notification of a failover by setting the tibco.tibjms.ft.switch.exception system property
Perhaps the library needs that to work?
