AseConnection.Open() throws AccessViolationException - c#

A line of code that's been working for as long as I can remember has suddenly stopped working and it's now throwing an AccessViolationException:
Exception:
System.AccessViolationException was unhandled
  Message=Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
  Source=Sybase.Data.AseClient
  StackTrace:
       at Sybase.Data.AseClient.AseConnectionPool.ᜀ(AseConnection A_0)
       at Sybase.Data.AseClient.AseConnectionPoolManager.ᜀ(String A_0, AseConnection A_1)
       at Sybase.Data.AseClient.AseConnection.Open()
  InnerException:
Code:
using (var connection = new AseConnection(this.ConnectionString))
{
    using (var command = connection.CreateCommand())
    {
        command.CommandText = "select * from TABLE_NAME";
        command.Connection.Open();
        ...
I've rebooted my machine, checked for recent Windows Updates, run CHKDSK, and uninstalled and re-installed Sybase, but nothing seems to work!
I'm targeting a Sybase 12.5.4 database using a 64-bit Sybase 12.5.4 client, with Sybase.Data.AseClient.dll referenced in my code (the same version as the Production code, which is working without issue), and sybdrvado11.dll is available while the application is running. Literally nothing has changed since it was last working.
Using Toad, I'm still able to connect and interact with the database as well, so it looks like this issue is only affecting my code.
Has anyone experienced this issue before?

Realise this is an ancient issue, but thought I'd share some insights I've had in this area.
There is a persistent underlying issue with the SAP/Sybase AseClient where, when connection pooling is enabled, it will attempt to get a connection from the pool. If none are available, it will attempt to create a new connection, unless there are already Max Pool Size connections in the pool.
In this case, instead of waiting, it will try to create a connection that overruns the bounds of the connection pool, overwriting protected memory and producing the error you have experienced above.
This issue exists even today.
Although the root cause in your case was a permissions issue, the AccessViolationException was caused by the connection pool - as can be seen from your stack trace.
We worked around it in some cases by disabling connection pooling - which crippled performance - and in other cases by setting Max Pool Size=1000, which hid it unless the ASE server was degraded enough to tie up 1000 connections. Neither approach is particularly satisfactory.
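For reference, a sketch of those workarounds, plus a throttling guard we've used in application code. It assumes the AseClient accepts the usual ADO.NET Pooling / Max Pool Size connection-string keywords (Max Pool Size appears above); the SemaphoreSlim gate is my own addition, not anything the driver provides:
using System;
using System.Threading;
using Sybase.Data.AseClient;

static class ThrottledDb
{
    // Workaround 1: append ";Pooling=false" to the connection string (safe but slow).
    // Workaround 2: append ";Max Pool Size=1000" to raise the ceiling.
    // Workaround 3 (below): cap concurrent opens ourselves, strictly below
    // Max Pool Size, so the pool's bound can never be overrun.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(90); // < Max Pool Size

    public static void WithConnection(string connectionString, Action<AseConnection> work)
    {
        Gate.Wait();
        try
        {
            using (var conn = new AseConnection(connectionString))
            {
                conn.Open();
                work(conn);
            }
        }
        finally
        {
            Gate.Release();
        }
    }
}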
That instability was one of the motivations behind an alternative AseClient that we wrote and open sourced that also supports .NET Core.

Turned out to be a database permissions issue relating to group membership... I was removed from the problem group and everything came to life again.

Related

Change CommandTimeout Default Value globally

We migrated some piece of old software to a new server. We used SQL Server 2008 Enterprise in the past and now we are using SQL Server 2014 Enterprise on a new machine, so it should be faster now.
The old software is legacy software and about to expire, so I don't want to put much effort into fixing it. But for some reason there is a C# function running a SQL query against the database for which I get the error message
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
All I read about that is that I have to extend the timeout by using CommandTimeout. But unfortunately everything runs under "context connection = true". Therefore, it would take quite a bit of work to rebuild this function with an opportunity to change the timeout.
And I'm asking myself why this ran on the old machine and why it won't on the new one. So it has to have something to do with the new machine or the new SQL Server engine. Is there any way to change the standard timeout of 30 seconds for a command in the .NET Framework or in SQL Server?
Thanks a lot for any suggestions!
You can set the timeout of a command with the CommandTimeout property:
var cmd = new SqlCommand { CommandTimeout = 60 };
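For context, a fuller (hypothetical) usage: the property is per-command, measured in seconds, the default is 30, and 0 means wait indefinitely.
using System.Data;
using System.Data.SqlClient;

// "dbo.SlowProcedure" is a made-up name for illustration.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.SlowProcedure", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.CommandTimeout = 60; // seconds; default is 30, 0 = infinite
    conn.Open();
    cmd.ExecuteNonQuery();
}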
OK, I haven't found a solution for the problem yet, but the timeout is not really the source of the problem. I gained access to the old system and ran some tests, and it turned out that the same function on the old machine with the old server software runs a lot faster, such that there is no timeout.
Hence, I have to focus on server speed and database tuning.
Thanks to everyone who looked into this question!
Edit:
I found a solution to my problem, indirectly. I couldn't find out why the execution of the statement on the new machine takes so long, but it turned out that the statement itself uses table variables. I changed them to a local temporary table in the tempdb database. Now the execution takes less than one second instead of more than 7 minutes!
To me, it looks like a problem with some cache or a misconfigured SQL Server. Unfortunately, I'm not really the server administrator and I won't twiddle with it. But I will mention it to the administrators. At least the program now runs perfectly.
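To illustrate the rewrite (table and column names here are made up): a table variable usually carries no statistics, so the optimizer assumes it holds about one row, while a local #temp table in tempdb does get statistics and often a far better plan.
// Before: table variable, no statistics (hypothetical names).
const string slowSql = @"
    DECLARE @ids TABLE (Id int PRIMARY KEY);
    INSERT INTO @ids SELECT Id FROM dbo.SourceTable;
    SELECT s.* FROM dbo.SourceTable s JOIN @ids i ON i.Id = s.Id;";

// After: local temporary table in tempdb, which has statistics.
const string fastSql = @"
    CREATE TABLE #ids (Id int PRIMARY KEY);
    INSERT INTO #ids SELECT Id FROM dbo.SourceTable;
    SELECT s.* FROM dbo.SourceTable s JOIN #ids i ON i.Id = s.Id;
    DROP TABLE #ids;";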

What is limiting requests count? Why `Timeout while getting a connection from pool`?

Under high load, NHibernate sometimes throws an exception when BeginTransaction is called. The message contains Timeout while getting a connection from pool, thrown from the RequestConnector method of Npgsql.
In the pg_log: could not receive data from client: No connection could be made because the target machine actively refused it.
Postgres stats don't show any expensive queries.
The machine has enough free CPU and RAM.
Versions: Postgres 9.4.0 64-bit, NHibernate 3.3.1.4000, Npgsql 2.2.3.
Postgres settings:
shared_buffers = 128MB
max_connections = 300
checkpoint_segments = 6
Connection string settings:
Pooling = true;
MINPOOLSIZE=20;
MAXPOOLSIZE=1000;
Postgres and the application are located on the same machine.
All NHibernate transactions and sessions are disposed with using.
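One observation on the settings above (mine, not from the answers that follow): MAXPOOLSIZE=1000 is larger than the server's max_connections = 300, so under load the pool can try to open connections Postgres will never grant. A connection string that keeps the pool within the server limit might look like this (standard Npgsql 2.x keywords; the values are illustrative):
// Keep MaxPoolSize below Postgres max_connections so the pool never
// asks for more sockets than the server is configured to accept.
var connString =
    "Server=127.0.0.1;Port=5432;Database=mydb;" +
    "User Id=myuser;Password=secret;" +
    "Pooling=true;MinPoolSize=20;MaxPoolSize=250;Timeout=30;";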
This problem was caused by a disk bottleneck. With an SSD it works much better.
One problem that I have seen in the past is the maximum number of sockets that can be open at the same time, and the linger time from when a socket is closed until it is freed. Under huge volumes this becomes problematic. Here are a couple of links that discuss this problem: Link 1 Link 2
We have noticed a similar problem. I found at the Npgsql GitHub that they changed DNS resolution from sync to async in version 2.1, and that leads to this error.
As of today (ver. 2.2.4.3) it is not fixed.
Here is a fix (revert):
Npgsql fork - commit

IBM WebSphere XMS.Net CWSMQ0082E error

On several occasions I have received the following error from a .NET (C#, 4.0) application, out of the blue, on sending a message through a producer:
CWSMQ0082E: Failed to send to CompCode: 2, Reason: 2009. A problem was encountered whilst sending a message. See the linked exception for more information.
Of course, the LinkedException (why not use the InnerException, IBM???) is null, i.e. no more information is available.
Code I'm using (pretty straightforward):
var m = _session.CreateBytesMessage();
m.WriteBytes(mybytearray);
m.JMSReplyTo = myreplytoqueue;
m.SetIntProperty(XMSC.JMS_IBM_MSGTYPE, MQC.MQMT_DATAGRAM);
m.SetIntProperty(XMSC.JMS_IBM_REPORT_COA, MQC.MQRO_COD);
m.SetIntProperty(XMSC.JMS_IBM_REPORT_COD, MQC.MQRO_COA);
myproducer.Send(m, DeliveryMode.Persistent, mypriority, myttl);
(Offtopic: I hate the SetIntProperty way of setting properties. Which <expletive deleted> came up with that idea? It takes ages to look up all sorts of constants all over the place and their allowed values.)
The exception is thrown on the .Send method. I'm using XMS.Net (IA9H / 2.0.0.7). The only Google result that turns up has a different reason code (and even if it were the same, it should be fixed in my version, if I understand correctly). This occurs randomly (though it seems to happen more often when it's been a while since a message was sent/received) and I have no way to reproduce it.
I have ab-so-lute-ly no idea how to troubleshoot this or even where to start looking. Is this caused by the server side? Is it caused by XMS.Net or some underlying IBM WebSphere MQ infrastructure?
Some results that I found that seem similar suggest setting SHARECNV to any value higher than 0, or to "true" / "yes", but the documentation explicitly tells me the default is 10. Also, I have no idea if this is the cause, so changing it to another value feels like a shotgun approach.
Anybody any idea on how to go about solving this? I could of course just catch the exception, tear everything (channels, sessions, whatever) down and restart, but that's just plain ugly IMHO.
The 2009 return code means "Connection Broken." Basically, the underlying TCP socket is gone and the client finds out about it at the time of the API call. It is possible to tune the channels using heartbeat and keepalive so that WMQ tries harder to keep the socket alive. However, if the socket is timed out by the underlying infrastructure, nothing WMQ can do will help. Examples we've seen are firewalls and load balancers set to detect idle connections and sever them.
Modern versions of WMQ client will attempt to reconnect transparently. The application just blocks a bit longer when this occurs.
Short of using the automatic reconnect, the only solution is in fact to rebuild the connection. Since it will get a new connection handle, all the object handles must be rebuilt as well.
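For illustration, a minimal sketch of that tear-down-and-rebuild approach (the host, port, channel, queue manager, and queue names are all hypothetical, and the retry loop is my own, not anything built into XMS):
using System;
using IBM.XMS;

static void SendWithOneRebuild(byte[] payload)
{
    for (int attempt = 0; attempt < 2; attempt++)
    {
        IConnection conn = null;
        try
        {
            // Rebuild the entire chain: factory -> connection -> session
            // -> producer. All handles hang off the connection, so none
            // of them survive a broken socket.
            IConnectionFactory cf =
                XMSFactoryFactory.GetInstance(XMSC.CT_WMQ).CreateConnectionFactory();
            cf.SetStringProperty(XMSC.WMQ_HOST_NAME, "mqhost");
            cf.SetIntProperty(XMSC.WMQ_PORT, 1414);
            cf.SetStringProperty(XMSC.WMQ_CHANNEL, "SYSTEM.DEF.SVRCONN");
            cf.SetStringProperty(XMSC.WMQ_QUEUE_MANAGER, "QM1");

            conn = cf.CreateConnection();
            ISession session = conn.CreateSession(false, AcknowledgeMode.AutoAcknowledge);
            IDestination dest = session.CreateQueue("queue://QM1/MY.QUEUE");
            IMessageProducer producer = session.CreateProducer(dest);

            IBytesMessage m = session.CreateBytesMessage();
            m.WriteBytes(payload);
            producer.Send(m, DeliveryMode.Persistent, 4, 0);
            return; // success
        }
        catch (XMSException)
        {
            if (attempt == 1) throw;
            // Most likely reason 2009 (connection broken): loop around
            // and rebuild everything from scratch.
        }
        finally
        {
            if (conn != null) conn.Close();
        }
    }
}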
Many of the tuning functions described here are available through the client configuration file, available in v7.0 and greater clients. In particular, the TCP stanza of that file enables keepalive. (The TCP spec says that if keepalive is provided, it must be disabled by default.) The QMgr has a similar ini file with configuration stanzas, including one for keepalive. The latest WMQ client is available as SupportPac MQC71 if you need that.
In cases where the main exception is sufficient to indicate the error, the inner exception will be null. In your case it's MQ reason code 2009, which means the connection to the queue manager has been broken. The socket through which your application and the queue manager were communicating was closed for some reason. The reason for the socket close could be a network blip.
Along with the suggestions T.Rob noted above, you could also run an XMS and queue manager trace to understand the problem further. Please see the Troubleshooting chapter in the XMS InfoCenter.
HTH

ASP.NET SqlConnection Timeout issue

I have run into a frustrating issue which I originally thought was a connection leak, but that does not seem to be the case. The scenario is this: the data access for this application uses the Enterprise Libraries (v4) from Microsoft. All data access calls are wrapped in using statements such as
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    ...more code
}
Now the index of this application makes 8 calls to the database to load everything, and I can bring the application to its knees by refreshing the index about 15 times. It seems that it's when the database reaches 113 connections that I receive this error. Here is what makes this weird:
I have run similar code with the entlib on high traffic sites and have NEVER had this problem ever.
If I kill all the connections to the database and get the production application back up and running, every time I refresh the application I can run this SQL
SELECT DB_NAME(dbid) AS 'Database Name',
       COUNT(dbid) AS 'Total Connections'
FROM sys.sysprocesses WITH (NOLOCK)
WHERE dbid > 0
GROUP BY dbid
I can see the number of connections actively increasing with each page refresh. Running the same code on my local box with the same connection string does not cause this problem. Further, if the production website is down, I can fire up the site via Visual Studio and run it fine; the only difference between the two is that the production site has Windows authentication turned on and my local copy doesn't. Turning Windows authentication off seems to have no effect on the server.
I have absolutely no clue what is causing this or why the connections are not being disposed of in SQL Server. The EntLib objects don't expose .Close() methods for anything, so I can't explicitly close the object.
Any thoughts?
Thanks!
Edit
Wow I just noticed that I never actually posted the error message. Oy. The actual connection error is: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Check that the stored procedure you are executing is not running into a row or table lock. Also, if possible, try to deploy to another server and check whether the application crawls again.
Also try to increase the maximum allowed connections for your SQL Server.
I think the "Timeout Expired" error is a general issue and may have several causes. Increasing the timeout can solve some of them, but not all.
You may also refer to the following link to troubleshoot and fix the error:
http://techielion.blogspot.com/2007/01/error-timeout-expired-timeout-period.html
Could it be a configuration issue on the server?
How do you make a connection to the database on the production server?
That might be an area worth looking into.
While I don't know the answer, I can suggest that for some reason connections are not being closed by your application when run in production. (Stating the obvious.)
You might want to examine your network configuration between the web server and SQL Server. High-latency networks can cause connections not being closed in time.
It might also help to look at the performance counters listed at the end of the following MSDN article:
http://msdn.microsoft.com/en-us/library/8xx3tyca%28VS.71%29.aspx
Finally, if nothing else helps, I'd get a debugger and the Enterprise Library source code onto production and debug your code inside the Enterprise Library to find out why connections are not being closed.
Silly question: are you properly closing your DataReader? If not, this could be the problem, and the difference in behaviour between dev and prod can be caused by different garbage collection patterns.
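For illustration (plain ADO.NET rather than the EntLib wrappers): wrapping the reader in using, or opening it with CommandBehavior.CloseConnection, guarantees the connection goes back to the pool even if an exception is thrown mid-read.
using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT 1", conn))  // query is a placeholder
{
    conn.Open();
    // CloseConnection ties the connection's lifetime to the reader's.
    using (var reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
    {
        while (reader.Read())
        {
            // ... consume rows ...
        }
    } // reader disposed -> connection closed -> returned to the pool
}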
I would disable connection pooling and try to suppress it (heh). Just add ";Pooling=false" to your connection string.
Or, perhaps you could add something like the following 'cleanup' code to your page (which closes any connection left open when the page unloads) - right in the 'using' clause:
System.Web.UI.Page page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null) {
    page.Unload += (EventHandler)delegate(object s, EventArgs e) {
        try {
            // close the command's connection if it was left open
            dbCommand.Connection.Close();
        } catch (Exception) {
            // ignore: the connection may already be closed or disposed
        }
    };
}
Also, make sure you've enabled the 'shared memory' protocol if your SQL Server and IIS are on the same machine (a real performance booster)!

Running out of DB connections using LINQ to SQL

In developing a relatively simple web service that takes the data provided by a POST and records it in a database table, we're getting this error:
Exception caught: The remote server returned an error: (500) Internal Server Error.
Stack trace: at System.Net.HttpWebRequest.GetResponse()
on some servers, but not others. The ones that are getting this are the physical machines; the others are virtual, and obviously the physical servers are far more powerful.
As far as we can tell, the problem is that the DB connections aren't being released back to the pools after each query. I'm using the using pattern below:
using (VoteDaoDataContext dao = new VoteDaoDataContext())
{
    dao.insert_response_and_update_count(answerVal, swid, agent, geo, DateTime.Now, ip);
    dao.SubmitChanges();
    msg += "Thank you for your vote.";
    dao.Dispose();
}
I added the dao.Dispose() call to ensure that connections are released when the method finishes, but I don't know whether or not it's necessary.
Am I using this pattern correctly? Is there something else I need to do to ensure that connections get returned to the pools correctly?
Thanks!
Your diagnostic information is not good enough. An HTTP 500 isn't enough detail to really tell whether your theory is correct. You're going to need to capture a full stack trace in your logging if you want to get to the problem. I think you've jumped to a conclusion here. And no, you do not need that Dispose() before the end of your using{} block. That's what using{} does.
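In other words, the snippet from the question works the same without the explicit call:
using (var dao = new VoteDaoDataContext())
{
    dao.insert_response_and_update_count(answerVal, swid, agent, geo, DateTime.Now, ip);
    dao.SubmitChanges();
    msg += "Thank you for your vote.";
} // Dispose() runs here automatically; the explicit dao.Dispose() was redundant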
I thought that Dispose() call was redundant, but I wanted to be sure.
We're seeing the connection pools saturating in the SQL logs (I can't look at them directly; I'm just a developer, and this stuff's running in a prod environment), and my ops guy said he's seeing connections timing out... and once they time out, the server starts running again, until the next time it saturates the connection pool.
We're going through the process of tweaking the connection pool settings at the moment... I wanted to be certain that I wasn't doing anything wrong, since this is my first time using LINQ.
Thanks!
