I know this question has been asked many times, however none of the answers fit my issue.
I have a thread timer firing every 30 seconds that queries an MSSQL database that is under heavy load. If I need to update the data in the console app I'm using, I use LINQ to SQL to update the data stored in memory.
My problem is that sometimes I get the error "ExecuteReader requires an open and available Connection."
The code from the thread timer fires a Thread.Run(reload());
The connection string is
// example code
void reload(...)
{
    string connstring = string.Format("Data Source={0},{1};Initial Catalog={2};User ID={3};Password={4};Application Name={5};Connect Timeout=120;MultipleActiveResultSets=True;Max Pool Size=1524;Pooling=true;", ...);
    settings = new ConnectionStringSettings("sqlServer", connstring, "System.Data.SqlClient");

    using (var tx = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions() { IsolationLevel = IsolationLevel.ReadUncommitted }))
    {
        using (SwitchDataDataContext data = new SwitchDataDataContext(settings.ConnectionString))
        {
            data.CommandTimeout = 560;
Then I do many LINQ to SQL searches. The exceptions happen from time to time, but not always on the same queries. It's like the connection is opened and then forced closed.
Sometimes the exception says the current state is Open, sometimes Closed or Connecting. I added a larger thread pool for the SQL db, but nothing seems to help.
I also use ADO.NET in other parts of the program without any issues.
I believe that your problem is that the transaction scope also has a timeout. The default timeout is 1 minute according to this answer. So the transaction times out long before your command does (560 seconds is about 9.3 minutes). You will need to set the Timeout property on the TransactionOptions instance you are creating:
new TransactionOptions()
{
IsolationLevel = IsolationLevel.ReadUncommitted,
Timeout = new TimeSpan(0,10,0) /* 10 Minutes */
}
You can verify that this is indeed the issue by setting the TransactionScope timeout to a small value to force it to time out.
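For reference, a minimal sketch of the longer timeout in context (the data context class and settings object are carried over from the question):

// sketch only: the scope timeout must outlive the 560-second CommandTimeout
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadUncommitted,
    Timeout = TimeSpan.FromMinutes(10)
};

using (var tx = new TransactionScope(TransactionScopeOption.Required, options))
using (var data = new SwitchDataDataContext(settings.ConnectionString))
{
    data.CommandTimeout = 560;
    // ... LINQ to SQL queries ...
    tx.Complete();
}

Also bear in mind that, if I recall correctly, machine.config imposes a machine-wide maximum (10 minutes by default), so scope timeouts above that are capped unless you also raise system.transactions/machineSettings maxTimeout.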
I changed LINQ to SQL to Entity Framework and received the same type of message. I believe the issue is lazy loading. I was using the collection on a different thread before it was ready. I just added .Include("Lab") to my query to load the related data up front, and it seems to have fixed the issue.
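For anyone hitting the same thing, the change was roughly this (the entity set and navigation property names come from my model, so treat them as placeholders):

// eagerly load the related Lab data so the other thread never triggers
// a lazy load later (entity set name is illustrative)
var switches = context.Switches
    .Include("Lab")
    .ToList();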
I'm running a large number of extracts from a SQL database via a C# Script Task within an SSIS package. The connection to the source is acquired from a connection manager within the package:
object rawConnection = Dts.Connections[sqlSpecItems["ConnectionManager"]].AcquireConnection(Dts.Transaction);
SqlConnection connectionFromCM = (SqlConnection)rawConnection;
(sqlSpecItems is a Dictionary that supplies the name of the connection manager to use.)
The ConnectionTimeout property of the connection manager is set to 0. The connection string generated for the CM is:
Data Source=MyDatabase;User ID=MyUserName;Initial Catalog=MyDatabaseName;Persist Security Info=True;Asynchronous Processing=True;Connect Timeout=0;Application Name=MyPackageApplicationName;
The connection is used to return a SqlDataReader object as follows:
private SqlDataReader GetDataReaderFromQuery(string sqlQueryToExecute)
{
// connect to server
SqlConnection sqlReaderSource = GetSourceSQLConnection();
// create command
SqlCommand sqlReaderCmd = new SqlCommand(sqlQueryToExecute, sqlReaderSource)
{
CommandType = CommandType.Text,
CommandTimeout = 0
};
// execute query to return data to reader
SqlDataReader sqlReader = sqlReaderCmd.ExecuteReader();
return sqlReader;
}
The operation fails with the error message:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Logged start and end times for the operation are typically around 50s apart, but can be up to 120s.
The source database is a cloud-hosted Business Central SQL database. The failures occur on a small number of specific extracts, the larger ones (although by no means large in absolute terms, around 20k rows). When attempting to query these via SSMS, there is typically a delay while the data is fetched from the source into memory, which I suspect is the reason for the timeout (notwithstanding the timeout of 0 set on both the connection and the command). Note that once the failure has happened, the data is in memory, so it doesn't recur if I restart the job.
I've looked at various answers on this site, as well as across other sites. Nothing I have seen so far has given me any idea of what I might try to resolve this problem. Any help would be appreciated.
So, I found the problem. Just in case it helps anyone else, the timeout wasn't throwing because of the reader. Rather, it was the SqlBulkCopy operation downstream of it that I pass the reader into. Adding:
bulkCopy.BulkCopyTimeout = 0;
has cleared the problem.
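For context, the downstream code now looks roughly like this (the destination connection string and table name are placeholders, and column mappings are omitted):

// the reader from GetDataReaderFromQuery is streamed straight into SqlBulkCopy;
// BulkCopyTimeout = 0 removes the 30-second default that was actually expiring
using (SqlDataReader reader = GetDataReaderFromQuery(sqlQueryToExecute))
using (var bulkCopy = new SqlBulkCopy(destinationConnectionString))
{
    bulkCopy.DestinationTableName = "dbo.MyTargetTable"; // placeholder
    bulkCopy.BulkCopyTimeout = 0;                        // no timeout
    bulkCopy.WriteToServer(reader);
}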
I'm trying to figure out the best way to batch insert about 37k rows into my SQL Server database using Dapper.
My problem is that when I use Parallel.ForEach, the number of connections to the database increases over a short period of time, finally hitting nearly 100, which gives connection pool errors. If I force the max degree of parallelism, it hits that max number and stays there.
Setting the max degree of parallelism feels wrong.
It currently does about 10-20 inserts a second. This is also in a simple console app, so there's no other database activity besides what's happening in my Parallel.ForEach loop.
Is using Parallel.ForEach the incorrect thing in this case because this is not CPU-bound?
Should I be using async/await? If so, what's stopping this from doing hundreds of db calls in one go?
Sample code, which is basically what I'm doing:
var items = GetItemsFromSomewhere(); // Returns 37K items.

Parallel.ForEach(items, item =>
{
    using (var sqlConnection = new SqlConnection(_connectionString))
    {
        var result = sqlConnection.Execute(myQuery, new { ... } );
    }
});
My (incorrect) understanding of this was that there should only be about 8 or so connections to the db at any time. The connection pool releases the connection (which remains instantiated in the pool, waiting to be used). If the Execute takes, say, even a full second (the longest running time for an insert was about 500ms, and that's 1 in every 100 or so), that's OK: that thread is blocked and waits until the Execute completes. Then the scope completes (Dispose is called automatically) and the connection is closed. With the connection closed, Parallel.ForEach then grabs the next item in the collection, goes to the connection pool and grabs a spare connection (remember, we just closed one a split second ago)... rinse, repeat.
Is this wrong?
Notes:
.NET 4.5
Sql 2012
Console app.
Using Dapper.NET for sql code.
First of all: if it is about performance, use SqlBulkCopy. This works with SQL Server. Other database servers may have their own bulk-copy solutions (Oracle has one).
SqlBulkCopy works like a bulk select: one statement opens one connection and streams all the data from the server to the client. With a bulk insert, it works the other way around: it streams all the new records from the client to the server.
See: https://msdn.microsoft.com/en-us/library/ex21zs8x(v=vs.110).aspx
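A minimal sketch of that route, assuming the items can be flattened into a DataTable whose columns match the target table (all names below are placeholders):

// requires System.Data and System.Data.SqlClient
var table = new DataTable();
table.Columns.Add("Name", typeof(string));
table.Columns.Add("Value", typeof(int));

foreach (var item in items)
    table.Rows.Add(item.Name, item.Value);

using (var bulkCopy = new SqlBulkCopy(_connectionString))
{
    bulkCopy.DestinationTableName = "dbo.MyItems"; // placeholder
    bulkCopy.WriteToServer(table);                 // one streamed operation instead of 37k inserts
}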
If you insist on using parallelism, you might want to consider the following code:
void BulkInsert<T>(object p)
{
    // the shared iterator is passed in as the thread-start parameter
    IEnumerator<T> e = (IEnumerator<T>)p;
    using (var sqlConnection = new SqlConnection(_connectionString))
    {
        sqlConnection.Open(); // one open connection for the lifetime of this thread
        while (true)
        {
            T item;
            lock (e) // only one thread may advance the shared iterator at a time
            {
                if (!e.MoveNext())
                    return;
                item = e.Current;
            }
            var result = sqlConnection.Execute(myQuery, new { ... } );
        }
    }
}
Now create your own threads and invoke this method on each of them with one and the same parameter: the iterator that runs through your collection. Each thread opens its own connection once, starts inserting, and closes the connection after all items are inserted. This solution uses as many connections as the threads you create.
PS: Multiple variants of the above code are possible. You could call it from background threads, from Tasks, etc. I hope you get the point.
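For example, something along these lines (the Item type and the thread count are placeholders):

// all workers share one iterator; each keeps a single open connection
IEnumerator<Item> iterator = items.GetEnumerator();
var threads = new List<Thread>();

for (int i = 0; i < 4; i++)               // 4 threads => 4 connections total
{
    var t = new Thread(BulkInsert<Item>); // ParameterizedThreadStart overload
    threads.Add(t);
    t.Start(iterator);
}

threads.ForEach(t => t.Join());           // wait until every item is inserted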
You should use SqlBulkCopy instead of inserting one by one. Faster and more efficient.
https://msdn.microsoft.com/en-us/library/ex21zs8x(v=vs.110).aspx
Credit to the answer owner:
Sql Bulk Copy/Insert in C#
One of the first things people learn when starting out with MySQL is that it's important to close the connection right after use, but why is this so important? On a website it can save some server resources (as described here), but why should we do that in a .NET desktop application? Does it share the same issues as a web application, or are there others?
If you use connection pooling, calling con.Close doesn't close the physical connection; you just tell the pool that this connection can be reused. If you run database calls in a loop without closing the connections, you'll quickly get exceptions like "too many open connections".
Check this:
for (int i = 0; i < 1000; i++)
{
    // note: neither the connection nor the reader is ever closed or disposed
    var con = new SqlConnection(Properties.Settings.Default.ConnectionString);
    con.Open();
    var cmd = new SqlCommand("Select 1", con);
    var rd = cmd.ExecuteReader();
    while (rd.Read())
        Console.WriteLine("{0}) {1}", i, rd.GetInt32(0));
}
One of the possible exceptions:
Timeout expired. The timeout period elapsed prior to obtaining a
connection from the pool. This may have occurred because all pooled
connections were in use and max pool size was reached.
By the way, the same is true for a MySqlConnection.
This is the correct way: use the using statement on all types implementing IDisposable:
using (var con = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    con.Open();
    for (int i = 0; i < 1000; i++)
    {
        using (var cmd = new SqlCommand("Select 1", con))
        using (var rd = cmd.ExecuteReader())
            while (rd.Read())
                Console.WriteLine("{0}) {1}", i, rd.GetInt32(0));
    }
} // no need to close it explicitly; the using statement does it via connection.Dispose
Yes, I think it is important to close your connection rather than leaving it open or letting the garbage collector eventually handle it. There are a couple of reasons why you should do this, and below them I'll describe the best method for how.
WHY:
So you've opened a connection to the database and sent some data back and forth along this pipeline, and now have the results you were looking for. Ideally at this point you do something else with the data and the end result of your application is achieved.
Once you have the data from the database you don't need the connection anymore; its part in this is done. Leaving it open does nothing but hold up memory, increase the number of connections the database and your application have to keep track of, and possibly push you closer to your maximum connection limit.
"But wait! I have to make a lot of database calls in rapid
succession!"
Okay, no problem: open the connection, run your calls, and then close it again. Opening a connection to a database in a "modern" application isn't going to cost you a significant amount of computing power/time, while explicitly closing a connection does nothing but help (it frees up memory and lowers your number of current connections).
So that is the why, here is the how
HOW:
Depending on how you are connecting to your MySQL database, you are probably using an IDisposable object to help manage the connection. Here is what MSDN has to say about using an IDisposable:
As a rule, when you use an IDisposable object, you should declare and
instantiate it in a using statement. The using statement calls the
Dispose method on the object in the correct way, and (when you use it
as shown earlier) it also causes the object itself to go out of scope
as soon as Dispose is called. Within the using block, the object is
read-only and cannot be modified or reassigned.
Here is my personal take on the subject:
Using a using block helps to keep your code cleaner (readability)
Using a using block helps to keep your code clean (memory-wise); it will "automagically" clean up unused items
A using block helps prevent a previous connection from being accidentally reused, as it automatically closes the connection when you are done with it.
In short, I think it is important to close connections properly, preferably with a con.Close() call in combination with a using block.
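To make that concrete, a minimal sketch using the MySQL Connector/NET classes (the connection string and query are placeholders):

// requires the MySql.Data.MySqlClient namespace
using (var con = new MySqlConnection(connectionString))
using (var cmd = new MySqlCommand("SELECT 1", con))
{
    con.Open();
    var result = cmd.ExecuteScalar();
} // Dispose closes the connection (returning it to the pool) even if an exception is thrown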
As pointed out in the comments this is also a very good question/answer similar to yours: Why always close Database connection?
I'd like to know the correct approach for running two simultaneous queries using NHibernate. Right now, I have a single ISession object that I use for all my queries:
session = sessionFactory.OpenSession();
In one thread, I'm loading some data which takes 10-15 seconds, but I don't need it right away so I don't want to block the entire program while it's loading:
IDbCommand cmd = session.Connection.CreateCommand();
cmd.CommandType = CommandType.TableDirect;
cmd.CommandText = "RecipesForModelingGraph";
IDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
// Do stuff
}
reader.Close();
This works fine, however in another thread I might be running a query such as:
var newBlah = new Blah();
session.Save(newBlah);
When the above transaction commits, I occasionally get an exception:
Additional information: There is already an open DataReader associated
with this Command which must be closed first.
Now, I thought maybe this was because I was running everything in the same transaction. So, I surrounded all my loading code with:
using (ITransaction transaction = session.BeginTransaction(IsolationLevel.Serializable))
{
// Same DataReader code as above
}
However, the problem has not gone away. I'm thinking maybe I need each thread to have its own ISession object. Is this the correct approach, or am I doing something wrong? Note that I only want a single open connection to the database. Also, keep in mind the background thread is only loading data and nothing else, so I'm not worried about isolation levels or data changing as it's being read.
The session is tied to the thread, and the commands created are linked to the session's connection object. So yes, if a commit or close is executed while an open reader exists, you will get an exception.
You could Join() your threads and wait until all are complete before closing/committing.
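As a rough sketch of that suggestion (LoadRecipes here is a hypothetical method wrapping the DataReader code from the question):

// finish the background read before the main thread commits on the shared session
var loader = new Thread(LoadRecipes);
loader.Start();

// ... other work on the main thread ...

loader.Join(); // the reader has been closed once this returns

using (var tx = session.BeginTransaction())
{
    session.Save(new Blah());
    tx.Commit();
}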
I have an ASP.NET based application.
The CPU on the SQL Server box is constantly at ~90-100%.
There are a lot of inefficient queries, which I am currently working on. However, looking at the code from a previous coder, he never seemed to close (or dispose of) the SqlConnection.
When I run the following query, I get around 450 connections that are "awaiting command":
SELECT Count(*) FROM
MASTER.DBO.SYSPROCESSES WHERE
DB_NAME(DBID) = 'CroCMS' AND DBID != 0
AND cmd = 'AWAITING COMMAND'
Is this likely to be causing a problem?
I read this and it seems to relate:
http://www.pythian.com/news/1270/sql-server-understanding-and-controlling-connection-pooling-fragmentation/
We are also getting a lot of timeouts, specifically when replication is enabled.
I'm not sure if this is related. I have disabled replication (transactional) for now and it seems OK.
(This server is a subscriber to our in-office database server.)
Would disposing of the SQL connection object help?
Yes, dispose of them, but otherwise ignore them for now. Possibly the pool is that large because the statements are slow. I would rather suggest:
Fixing the statements.
Checking that the application only uses one connection PER REQUEST (i.e. it does not open multiple at the same time).
If the problem does not get better after optimizing the SQL, you can revisit the pool.
You should always dispose of the command object when you're done with it. That way connection pooling can be used more effectively.
The easiest way is to use the using statement.
using (var sqlConnection = new SqlConnection("connectionstring"))
using (var sqlCommand = new SqlCommand("storedprocname", sqlConnection)
       { CommandType = CommandType.StoredProcedure })
{
    sqlConnection.Open();
    // do what you should: set params, execute, etc.
}