Change CommandTimeout Default Value globally - c#

We migrated a piece of old software to a new server. We used SQL Server 2008 Enterprise in the past and now we are using SQL Server 2014 Enterprise on a new machine, so it should be faster now.
The old software is legacy and about to be retired, so I don't want to put much effort into fixing it. But for some reason there is a C# function running a SQL query against the database for which I get the error message:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Everything I've read about this says I have to extend the timeout by using CommandTimeout. But unfortunately everything runs under "context connection = true", so it would take quite a bit of work to rebuild this function in a way that lets me change the timeout.
And I'm asking myself why this ran on the old machine but won't on the new one. So it must have something to do with the new machine or the new SQL Server engine. Is there any way to change the default timeout of 30 seconds for a command in the .NET Framework or in SQL Server?
Thanks a lot for any suggestions!

You can set the timeout of a command with the CommandTimeout property:
var cmd = new SqlCommand { CommandTimeout = 60 }; // value is in seconds
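For context, here is a fuller hedged sketch of the same thing; the connection string, database, and table names below are placeholders, not from the question:
// The timeout is per command and measured in seconds; System.Data.SqlClient
// defaults to 30, and 0 means wait indefinitely.
using System.Data.SqlClient;

using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.BigTable", connection))
{
    command.CommandTimeout = 60;
    connection.Open();
    int rowCount = (int)command.ExecuteScalar();
}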

OK, I haven't found a solution to the problem yet, but the timeout is not really the source of the problem. I gained access to the old system and ran some tests, and it turned out that the same function on the old machine with the old server software runs a lot faster, so there is no timeout.
Hence, I have to focus on server speed and database tuning.
Thanks to everyone who spent time on this question!
Edit:
I found a solution to my problem, indirectly. I couldn't find out why the execution of the statement on the new machine takes so long, but it turned out that the statement itself uses table variables. I changed them to local temporary tables in tempdb. Now the execution takes less than one second instead of more than 7 minutes!
To me, it looks like a problem with some cache or a misconfigured SQL Server. Unfortunately, I'm not the server administrator and I won't twiddle with it, but I will mention it to the administrators. At least the program runs perfectly now.
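For anyone hitting the same thing, here is a hedged illustration of the kind of change I made; the table and column names are invented, and my real statement is far bigger. The usual explanation is that a table variable gets poor cardinality estimates, while a local temp table has real statistics in tempdb:
// Before: the statement used a table variable.
const string withTableVariable = @"
    DECLARE @Work TABLE (Id INT PRIMARY KEY, Amount MONEY);
    INSERT INTO @Work (Id, Amount) SELECT Id, Amount FROM dbo.Source;
    SELECT SUM(Amount) FROM @Work;";

// After: a local temporary table, created and dropped in the same batch.
const string withTempTable = @"
    CREATE TABLE #Work (Id INT PRIMARY KEY, Amount MONEY);
    INSERT INTO #Work (Id, Amount) SELECT Id, Amount FROM dbo.Source;
    SELECT SUM(Amount) FROM #Work;
    DROP TABLE #Work;";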

About the behavior of the Open method (MySQL, Connector/Net)

First of all, I started studying C# about six months ago and I'm not good at English, so I'm sorry if I write something you cannot understand.
I'm developing a C# application that ONLY connects to a MySQL server (both remote and local), and I use the MySql.Data.Client package, version 8.0.12.
"ONLY" means: I don't send any queries; I just want to find out whether I can connect to the server using a user name and password.
So I wrote the code below. I have two questions, and I would like some advice.
// Build the connection string (the variable is named "sendCommand" here).
string sendCommand = string.Format("host={0}; userid={1}; password={2}; SslMode=none;",
    pIPAddress.ToString(), mOption1, mOption2);
MySqlConnection mysqlConnection = new MySqlConnection(sendCommand);
mysqlConnection.Open();
// Compare against the ConnectionState enum rather than a string.
if (mysqlConnection.State == System.Data.ConnectionState.Open)
{
    result = true;
}
mysqlConnection.Close();
Q1
The application should not send unnecessary packets to the server.
However, in Wireshark I saw that it sends a "show variables" command.
I tried changing some ConnectionString properties (in this code the string is called "sendCommand") because I don't want it to send the "show variables" command (for example, CacheServerProperties=true, AllowBatch=false, and so on), but it still sends it.
Can I connect without the "show variables" command being sent?
Q2
(It's solved now, but I cannot find the cause.)
When I started developing this application, I used MySql.Data.Client 8.0.11.
But at that time, the MySqlConnection.Open method took a long time to connect to the server (about 7~10 seconds).
There were no network problems, and I was able to connect from cmd.exe without any delay (it didn't even take a second).
I tried all sorts of things: changing the server I connect to, restarting the OS (on both the application side and the server side), and changing ConnectionString properties.
The MySQL server was not dying, and it looked like the cause was not in the MySQL server or the network: the application received the "ServerGreeting" packet from the server immediately, but the next step, where the application sends the "LoginRequest" packet to the server, took a long time.
I was not able to solve this problem, but when I updated MySql.Data.Client to version 8.0.12 it went away!
Is this problem caused by MySql.Data.Client 8.0.11?
And is there any other solution?
The application should not send unnecessary packets to the server. … Can I connect without the "show variables" command being sent?
You can, by using an alternative ADO.NET library: MySqlConnector. It sends a lot fewer packets when opening the connection (but it may not send the absolute minimum possible).
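For illustration, here is a hedged sketch of the same connectivity check using MySqlConnector; note that its 0.x releases (current at the time of writing) deliberately reuse the MySql.Data.MySqlClient namespace as a drop-in replacement, while the later 1.0+ releases use the MySqlConnector namespace:
using MySqlConnector; // for the 0.x releases: using MySql.Data.MySqlClient;

static bool CanConnect(string host, string user, string password)
{
    string connectionString = string.Format(
        "host={0}; userid={1}; password={2}; SslMode=none;", host, user, password);
    try
    {
        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open(); // throws MySqlException if the login fails
            return true;
        }
    }
    catch (MySqlException)
    {
        return false;
    }
}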
But at that time, the MySqlConnection.Open method took a long time to connect to the server (about 7~10 seconds). … Is this problem caused by MySql.Data.Client 8.0.11?
This sounds very much like MySQL bug 80030, which is a bug in 8.0.11. That case says it's going to be fixed in 8.0.13, not 8.0.12, but perhaps Oracle changed their release plans after making that comment.
This could also be fixed by using MySqlConnector, which never used an inefficient WMI query that was causing the performance problem in MySql.Data.

What is limiting the request count? Why `Timeout while getting a connection from pool`?

Under high load, NHibernate sometimes throws an exception when BeginTransaction is called. The message contains "Timeout while getting a connection from pool" in the RequestConnector method of Npgsql.
In the pg_log: could not receive data from client: No connection could be made because the target machine actively refused it.
Postgres statistics don't show any expensive queries.
The machine has enough free CPU and RAM.
Versions: Postgres 9.4.0 64-bit, NHibernate 3.3.1.4000, Npgsql 2.2.3.
Postgres settings:
shared_buffers = 128MB
max_connections = 300
checkpoint_segments = 6
Connection string settings:
Pooling=true;
MinPoolSize=20;
MaxPoolSize=1000;
Postgres and the application are located on the same machine.
All NHibernate transactions and sessions are disposed with using.
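For reference, the disposal pattern in use looks roughly like this; sessionFactory stands in for the application's NHibernate ISessionFactory:
// using NHibernate;
// Both the session and the transaction are disposed via using, which should
// return the underlying Npgsql connection to the pool when the block exits.
using (ISession session = sessionFactory.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    // ... queries and saves against the session ...
    transaction.Commit();
}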
This problem was caused by a disk bottleneck. With an SSD it works much better.
One problem that I have seen in the past is the maximum number of sockets that can be open at the same time, and the linger time between when a socket is closed and when it is freed. Under huge volumes this becomes problematic. Here are a couple of links that discuss this problem: Link 1, Link 2
We have noticed a similar problem. I found on the Npgsql GitHub that they changed DNS resolution from sync to async in version 2.1, and that this leads to this error.
As of today (version 2.2.4.3) it is not fixed.
Here is a fix (a revert):
Npgsql fork - commit

Stored procedure returns "Timeout expired"

In my Windows application, I use SQL Server 2008. My database size is 5086080 KB. I get a "timeout expired" error when saving a transaction via a stored procedure. So I set the command timeout to 1200 and it works fine. But I don't think that should be necessary, because the insert only involves 2 or 3 rows. Is there any other way to solve this problem?
This is the detailed error message:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The timeout should be based entirely on how long the actual SQL command is likely to take.
For example, most of our commands run sprocs that should take no longer than 30 seconds to complete; however, there are a couple that run for much longer, so they have their own high-valued timeouts.
You'll need to profile how long your routine takes on average, then adjust the timeout accordingly, and remember to leave room for variables like latency, etc.
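As a hedged sketch of that per-command adjustment (the sproc name is a placeholder, and connectionString is assumed to be defined elsewhere):
using System.Data;            // CommandType
using System.Data.SqlClient;  // SqlConnection, SqlCommand

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.SaveTransaction", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.CommandTimeout = 1200; // seconds; only this known-slow call gets a long timeout
    connection.Open();
    command.ExecuteNonQuery();
}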
You need to profile your SQL query and your code at every step. Only then will you be able to find the exact bottleneck in your program.
Is somebody else keeping a transaction open that is holding up your query? Run sp_who or sp_who2 on the server to see what else is running.

Silverlight 4, Ria Services, HttpRequestTimedOutWithoutDetail

I have a database that is being accessed by a Silverlight application, and an Error_Log table in that same database.
I have hundreds of HttpRequestTimedOutWithoutDetail errors in the Error_Log table. I have set the timeout in the web.config to over a minute. I often receive the error if I call a query twice in a row.
I've decreased the volume of errors by checking the context first, but they still happen often. At first I thought it was a server load issue, but then I gave my SQL Server 2008 instance 3 GB of RAM, and I still get the error with almost no users.
Can someone please help me understand why these errors happen when seemingly there is no reason to time out? Does it have to do with multiple queries being sent at the same time? Or does it have to do with sending off queries that all hit the same database context?
EDIT:
I'm thinking this might be a connection pooling issue? I have it turned on, but maybe the connections aren't getting closed properly?
// Increase the WCF OpenTimeout on the RIA Services domain client's binding.
((WebDomainClient<RealFormsContext.IRealFormsServiceContract>)Context.DomainClient)
    .ChannelFactory.Endpoint.Binding.OpenTimeout = new TimeSpan(0, 10, 0);
That got rid of my timeout errors.

ASP.NET SqlConnection Timeout issue

I have run into a frustrating issue which I originally thought was a connection leak, but that does not seem to be the case. The scenario is this: the data access for this application uses the Enterprise Library (v4) from Microsoft. All data access calls are wrapped in using statements, such as:
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    // ...more code
}
Now, the index page of this application makes 8 calls to the database to load everything, and I can bring the application to its knees by refreshing that page about 15 times. It seems that I receive this error when the database reaches 113 connections. Here is what makes this weird:
I have run similar code with the EntLib on high-traffic sites and have NEVER had this problem.
If I kill all the connections to the database and get the production application back up and running, then every time I refresh the application I can run this SQL:
SELECT DB_NAME(dbid) as 'Database Name',
COUNT(dbid) as 'Total Connections'
FROM sys.sysprocesses WITH (nolock)
WHERE dbid > 0
GROUP BY dbid
I can see the number of connections actively increasing with each page refresh. Running the same code on my local box with the same connection string does not cause this problem. Furthermore, if the production website is down, I can fire up the site via Visual Studio and run it fine; the only difference between the two is that the production site has Windows authentication turned on and my local copy doesn't. Turning Windows authentication off seems to have no effect on the server.
I have absolutely no clue what is causing this or why the connections are not being disposed of in SQL Server. The EntLib objects do not expose .Close() methods for anything, so I can't explicitly close the connections.
Any thoughts?
Thanks!
Edit
Wow, I just noticed that I never actually posted the error message. Oy. The actual connection error is: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Check that the stored procedure you are executing is not running into a row or table lock. Also, if possible, try deploying to another server and check whether the application crawls again.
Also try increasing the maximum number of allowed connections for your SQL Server, as sketched below.
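For that last suggestion, a hedged example of the T-SQL involved ('user connections' is an advanced option; 0, the default, means unlimited), shown as a C# command string to match the rest of the code:
// Run once against the server; 500 is an arbitrary example value.
const string raiseConnectionLimit = @"
    EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
    EXEC sp_configure 'user connections', 500; RECONFIGURE;";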
I think the "Timeout Expired" error is a general issue and may have several causes. Increasing the timeout can solve some of them, but not all.
You may also refer to the following link to troubleshoot and fix the error:
http://techielion.blogspot.com/2007/01/error-timeout-expired-timeout-period.html
Could it be a configuration issue on the server?
How do you make a connection to the database on the production server?
That might be an area worth looking into.
While I don't know the answer, I can suggest that for some reason connections are not being closed by your application when run in production (stating the obvious).
You might want to examine the network configuration between the web server and the SQL server. High-latency networks can cause connections not to be closed in time.
It might also help to look at the performance counters listed at the end of the following MSDN article:
http://msdn.microsoft.com/en-us/library/8xx3tyca%28VS.71%29.aspx
Finally, if nothing else helps, I'd get a debugger and the Enterprise Library source code onto the production machine and debug your code inside the Enterprise Library to find out why the connections are not being closed.
Silly question: are you properly closing your DataReader? If not, this could be the problem, and the difference in behaviour between dev and prod could be caused by different garbage collection patterns.
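For what it's worth, a hedged sketch of the pattern to check for, shown in plain ADO.NET (connectionString and the sproc name are placeholders); the same idea applies with the Enterprise Library, where the connection stays in use until the reader is closed:
// If a reader is left open, its connection never goes back to the pool.
// The using blocks guarantee disposal even when an exception is thrown.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("dbo.GetItems", connection) { CommandType = CommandType.StoredProcedure })
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... consume the row ...
        }
    } // reader closed and disposed here
} // connection returned to the pool here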
I would try disabling connection pooling to see whether that suppresses the problem (heh). Just add ";Pooling=false" to your connection string.
Or perhaps you could add something like the following cleanup code to your page, right in the using block, which closes any connection left open when the page unloads:
System.Web.UI.Page page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null)
{
    // When the page unloads, make sure the command's connection gets closed.
    page.Unload += (EventHandler)delegate(object s, EventArgs e)
    {
        try
        {
            dbCommand.Connection.Close();
        }
        catch (Exception)
        {
            // Ignore: the connection may already be closed or disposed.
        }
        finally
        {
            result = null; // drop the page-level reference ("result" comes from the surrounding code)
        }
    };
}
Also, make sure you've enabled the shared memory protocol if your SQL Server and IIS are on the same machine (a real performance booster)!
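As a hedged aside, shared memory can also be requested explicitly from the connection string with the lpc: prefix (the server name below is illustrative); it only works when the client and the SQL Server instance share a machine:
// lpc: forces the shared memory protocol instead of TCP or named pipes.
var connectionString = "Data Source=lpc:MYSERVER;Initial Catalog=MyDb;Integrated Security=True;";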
