I have a database that is being accessed by a Silverlight application. There is an Error_Log table in that same database.
I have hundreds of HttpRequestTimedOutWithoutDetail errors in the Error_Log table. I have set the timeout in the web.config to over a minute, and I often receive the error if I call a query twice in a row.
I've decreased the volume of errors by checking the context first, but they still happen often. At first I thought it was a server load issue, but then I increased my SQL Server 2008 instance's memory to 3 GB of RAM, and I still get the error with almost no users.
Can someone please help me understand why these errors happen when seemingly there is no reason to time out? Does it have to do with multiple queries being sent at the same time? Or does it have to do with sending off queries that all hit the same database context?
EDIT:
I'm thinking this might be a connection pooling issue? I have it turned on, but maybe the connections aren't getting closed properly?
((WebDomainClient<RealFormsContext.IRealFormsServiceContract>)Context.DomainClient)
.ChannelFactory.Endpoint.Binding.OpenTimeout = new TimeSpan(0, 10, 0);
That got rid of my Timeout errors.
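For completeness, the same Binding object also exposes SendTimeout and ReceiveTimeout, which can matter for long-running calls. A rough sketch (the ten-minute values are just illustrative; Context is the generated RIA Services DomainContext from the code above):
var binding = ((WebDomainClient<RealFormsContext.IRealFormsServiceContract>)Context.DomainClient)
    .ChannelFactory.Endpoint.Binding;
binding.OpenTimeout = TimeSpan.FromMinutes(10);    // time allowed to open the channel
binding.SendTimeout = TimeSpan.FromMinutes(10);    // time allowed for a request/reply round trip
binding.ReceiveTimeout = TimeSpan.FromMinutes(10); // idle time before the channel is faulted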
Related
We migrated a piece of old software to a new server. We used SQL Server 2008 Enterprise in the past and are now using SQL Server 2014 Enterprise on a new machine, so it should be faster now.
The old software is legacy and about to expire, therefore I don't want to put much effort into fixing it. But for some reason there is a C# function running a SQL query against the database for which I get the error message
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
All I have read about this says that I have to extend the timeout by using CommandTimeout. But unfortunately everything runs under "context connection = true", so it would take quite a bit of work to rebuild this function in a way that allows the timeout to be changed.
And I'm asking myself why this ran on the old machine and why it won't on the new one. So it has to have something to do with the new machine or the new SQL Server engine. Is there any way to change the standard timeout of 30 seconds for a command, either in the .NET Framework or in SQL Server?
Thanks a lot for any suggestions!
You can set the timeout of a command with the CommandTimeout property:
var cmd = new SqlCommand { CommandTimeout = 60 };
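For context, a minimal sketch of the same setting applied to a complete command (the connection string and procedure name below are placeholders, and the System.Data/System.Data.SqlClient namespaces are assumed):
using (var connection = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.SomeLongRunningProc", connection)
{
    CommandType = CommandType.StoredProcedure,
    CommandTimeout = 60   // seconds; the default is 30
})
{
    connection.Open();
    cmd.ExecuteNonQuery();
}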
OK, I haven't found a solution for the problem yet, but the timeout is not really the source of the problem. I gained access to the old system and ran some tests, and it turned out that the same function on the old machine with the old server software runs a lot faster, such that there is no timeout.
Hence, I have to focus on server speed and database tuning.
Thanks to everyone who looked into this question!
Edit:
I found a solution to my problem, indirectly. I couldn't find out why the execution of the statement on the new machine takes so long, but it turned out that the statement itself uses table variables. I changed them to local temporary tables (which live in tempdb). Now the execution takes less than one second instead of more than 7 minutes!
To me it looks like a problem with some cache or a misconfigured SQL Server. Unfortunately, I'm not really the server administrator and I won't tinker with it, but I will mention it to the administrators. At least the program now runs perfectly.
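To illustrate the change (the table and column names here are made up, not the real statement):
-- before: a table variable, which gets no statistics and is often estimated at one row
DECLARE @Results TABLE (Id INT PRIMARY KEY, Amount DECIMAL(18, 2));

-- after: a local temporary table, created in tempdb, which does get statistics and can be indexed
CREATE TABLE #Results (Id INT PRIMARY KEY, Amount DECIMAL(18, 2));
INSERT INTO #Results (Id, Amount)
SELECT Id, Amount FROM dbo.SourceTable;
-- ...use #Results wherever @Results was used...
DROP TABLE #Results;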
I have created a Windows service using C#/.NET. The service updates Oracle tables whenever it receives new files. I am using a timer control with an interval of 30 seconds, and ODP.NET as the data access layer.
The very first time I get an error, but subsequently the service works fine. If the service is idle for a long time and then receives a file, I get a "connection lost" error, but if it receives another file after that, it loads successfully.
Could you suggest whether I need to add any properties to the connection string to fix this error?
Hello Karthik. There seem to be two issues here.
You are best to open and close a new connection each time your service is called, as in the sketch after these two points.
Windows services quickly go into a latent state if not called, and they will respond more slowly on the next call. If the caller does not have a sufficient timeout value to accommodate this lag, it will return a timeout error. If you address these two points you should be fine.
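A minimal sketch of the first point, assuming the classic ODP.NET provider (Oracle.DataAccess.Client) and a hypothetical UpdateTables method standing in for your existing load logic; the Validate Connection attribute is an ODP.NET connection string option that may also help weed out connections that went stale while the service was idle:
// all values other than Validate Connection are placeholders
string connectionString =
    "User Id=scott;Password=secret;Data Source=ORCL;Validate Connection=true";

void ProcessFile(string filePath)
{
    using (var connection = new OracleConnection(connectionString))
    {
        connection.Open();                  // taken from the pool when needed
        UpdateTables(connection, filePath); // your existing update logic (hypothetical)
    }                                       // Dispose returns the connection to the pool
}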
I have run into a frustrating issue which I originally thought was a connection leak, but that does not seem to be the case. The scenario is this: the data access for this application uses the Enterprise Library (v4) from Microsoft. All data access calls are wrapped in using statements such as
using (DbCommand dbCommand = db.GetStoredProcCommand("sproc"))
{
    db.AddInParameter(dbCommand, "MaxReturn", DbType.Int32, MaxReturn);
    // ...more code
}
Now the index page of this application makes 8 calls to the database to load everything, and I can bring the application to its knees by refreshing it about 15 times. It seems that the error appears once the database reaches 113 connections. Here is what makes this weird:
I have run similar code with the EntLib on high-traffic sites and have NEVER had this problem.
If I kill all the connections to the database and get the production application back up and running, then every time I refresh the application I can run this SQL
SELECT DB_NAME(dbid) as 'Database Name',
COUNT(dbid) as 'Total Connections'
FROM sys.sysprocesses WITH (nolock)
WHERE dbid > 0
GROUP BY dbid
I can see the number of connections actively increasing with each page refresh. Running the same code on my local box with the same connection string does not cause this problem. Furthermore, if the production website is down, I can fire up the site via Visual Studio and run it fine; the only difference between the two is that the production site has Windows authentication turned on and my local copy doesn't. Turning Windows authentication off seems to have no effect on the server.
I have absolutely no clue what is causing this or why the connections are not being disposed of in SQL Server. The EntLib objects do not expose .Close() methods for anything, so I can't explicitly close the connections.
Any thoughts?
Thanks!
Edit
Wow I just noticed that I never actually posted the error message. Oy. The actual connection error is: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Check that the stored procedure you are executing is not running into a row or table lock. Also, if you can, try deploying to another server and check whether the application slows to a crawl again.
Also try to increase the maximum allowed connections for your SQL server.
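If you do want to raise the server-side limit, the relevant setting is the 'user connections' option (0 means the default maximum of 32,767). A sketch, with the caveat that it won't help if the real cap is the client's connection pool:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'user connections', 500;  -- example value
RECONFIGURE;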
I think the “Timeout Expired” error is a general issue and may have several causes. Increasing the timeout can solve some of them, but not all.
You may also refer to the following link to troubleshoot and fix the error:
http://techielion.blogspot.com/2007/01/error-timeout-expired-timeout-period.html
Could it be a configuration issue on the server?
How do you make a connection to the database on the production server?
That might be an area worth looking into.
While I don't know the answer, I can suggest that for some reason connections are not being closed by your application when it runs in production. (Stating the obvious.)
You might want to examine your network configuration between the web server and the SQL Server. High-latency networks can cause connections not to be closed in time.
It might also help to look at the performance counters listed at the end of the following MSDN article:
http://msdn.microsoft.com/en-us/library/8xx3tyca%28VS.71%29.aspx
Finally, if nothing else helps, I'd get a debugger and the Enterprise Library source code onto production and debug your code inside the Enterprise Library to find out why the connections are not being closed.
Silly question: are you properly closing your DataReader? If not, this could be the problem, and the difference in behaviour between dev and prod could be caused by different garbage collection patterns.
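As a generic ADO.NET illustration (not specific to the Enterprise Library): wrapping the reader in a using block, optionally with CommandBehavior.CloseConnection, guarantees the underlying connection is released even if an exception is thrown:
using (var reader = dbCommand.ExecuteReader(CommandBehavior.CloseConnection))
{
    while (reader.Read())
    {
        // ...map rows...
    }
}   // Dispose closes the reader and, with CloseConnection, the connection as well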
I would disable connection pooling and try to suppress it (heh). Just add ";Pooling=false" to your connection string.
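For example (the server and database names here are just placeholders):
Server=PRODSQL01;Database=MyAppDb;Integrated Security=SSPI;Pooling=false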
Or perhaps you could add something like the following 'cleanup' code to your page (which closes any connection left open when the page unloads), right inside the 'using' block:
System.Web.UI.Page page = HttpContext.Current.Handler as System.Web.UI.Page;
if (page != null)
{
    // when the page unloads, close whatever connection the command still holds
    page.Unload += (EventHandler)delegate(object s, EventArgs e)
    {
        try
        {
            dbCommand.Connection.Close();
        }
        catch (Exception)
        {
            // ignore errors during cleanup
        }
        finally
        {
            result = null;
        }
    };
}
Also, make sure you've enabled the 'shared memory' protocol if your SQL Server and IIS are on the same machine (a real performance booster)!
I'm building a site that runs fine for a few hours, but then *.asmx and *.ashx calls start timing out.
The exception is: "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached."
I'm using SubSonic as the ORM.
I suspect that the problem is based on a scheduled task that runs every few minutes and hits the database. When I look in SQL Server 2000's "Current Activity", I see there are:
100 processes with the status "sleeping"
100 locks
The 100 processes are from the Application ".Net SqlClient Data Provider" and the command is "AWAITING COMMAND".
So I'm guessing that's the issue, but how do I troubleshoot it? Does this sound like a deadlock condition in the db? As soon as I run
c:\> iisreset
everything's fine (for a while).
Thanks - I've just never encountered something like this and am not sure the best way to proceed.
Michael
It could be a duplicate of this problem - Is connection pooling working correctly in Subsonic?
If you're loading objects with Load() instead of LoadAndCloseReader(), each connection will be left open and eventually you'll exhaust the connection pool.
When you call Load() on a collection it will leave the Reader open - make sure you call LoadAndCloseReader() if you want the reader to close off - or use a using block.
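Roughly (OrderCollection and GetOrdersReader() are hypothetical stand-ins for one of your generated SubSonic collections and for however you obtain the IDataReader):
var orders = new OrderCollection();
orders.Load(GetOrdersReader());                // leaves the reader (and its connection) open
orders.LoadAndCloseReader(GetOrdersReader());  // closes the reader when loading finishes

// or keep Load() but close the reader yourself with a using block:
using (IDataReader reader = GetOrdersReader())
{
    orders.Load(reader);
}                                              // reader and its connection released here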
It helps to have some source code as well.
I don't know anything about Subsonic, but maybe you are leaking database 'contexts'? I'd check that any database resource is being disposed after you're finished with it...
I am using JMeter to test our application's performance, but I found that when I send 20 requests from JMeter, the result should be 20 new records added to SQL Server, yet I only find 5 new records. This suggests SQL Server discarded the other requests (I took a log and made sure that the inserts for the new records were sent to SQL Server).
Does anyone have ideas? What's the threshold number of requests SQL Server can handle per second? Or do I need to do some configuration?
Yeah, in my application I tried, but it seems that only 5 requests are accepted; I don't know how to configure it so it can accept more.
I'm not convinced the number of requests per second is directly related to SQL Server throwing away your inserts. Perhaps there's an application logic error that rolls back or fails to commit the inserts, or the application fails to handle concurrency and inserts data that violates the constraints. I'd check the server logs for deadlocks as well.
Use either SQL Profiler or the LINQ data context for logging to see what has actually been sent to the server and then determine what the problem is.
Enable the data context log like this:
datacontext.Log = Console.Out;
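A small sketch of that in context (MyDataContext and the file path are placeholders; Log accepts any TextWriter, so you can also capture the generated SQL to a file):
using (var dataContext = new MyDataContext(connectionString))
using (var logWriter = new StreamWriter(@"C:\temp\linq-to-sql.log"))
{
    dataContext.Log = logWriter;   // every generated SQL statement is written here
    // ...run the inserts you expect to see...
    dataContext.SubmitChanges();
}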
As a side note, I've been processing 10 000 transactions per second in SQL Server, so I don't think that is the problem.
This is very dependent on what type of queries you are doing. You can have many queries requesting data that is already in a buffer, so that no disk access is required, or you can have reads which actually require disk access. If your database is small and you have enough memory, you might have all the data in memory at all times; access is very fast then, and you might get 100+ queries/second. If you need to read from disk, you are dependent on your hardware. I have opted for an UltraSCSI-160 controller with UltraSCSI-160 drives, the fastest option you can get on a PC-type platform.
I process about 75,000 records every night (they get downloaded from another server). For each record I process, the program makes about 4 - 10 queries to put the new record into the correct 'slot'. The entire process takes about 3 minutes. I'm running this on an 850 MHz AMD Athlon machine with 768 MB of RAM.
Hope this gives you a little indication about the speed.
This is an old case study; now that SQL Server 2017 and 2019 exist, I am waiting to see what happens.
https://blogs.msdn.microsoft.com/sqlcat/2016/10/26/how-bwin-is-using-sql-server-2016-in-memory-oltp-to-achieve-unprecedented-performance-and-scale/
SQL Server 2016: 1,200,000 batch requests/sec, using memory-optimized tables with LOB support and natively compiled stored procedures.
To get benchmark tests for SQL Server and other RDBMSs, visit the Transaction Processing Performance Council website.
You can also use SQL Server Profiler to check how your queries are executed.